Stephen Hawking has said: "Once humans develop artificial intelligence, it would take off on its own and redesign itself ... The development of full artificial intelligence could spell the end of the human race." Elon Musk shares a similar view, saying: "I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that ... We are summoning the demon."
Ray Kurzweil, author of five books on artificial intelligence, including the recent New York Times best-seller How to Create a Mind, is (of course) in the vanguard of those pooh-poohing the concerns of such great thinkers and futurists as Hawking and Musk. He believes the benefits to be gained from AI far outweigh any risks, and that we are smart enough and ethical enough to overcome all of them.
What is your opinion?
Anything intelligent (human or machine) can be dangerous if its goals differ from yours (or from what it was programmed for). So full artificial intelligence can be a danger to everyone, including its developer.
By the way, this question is very similar to the following one, where you can find a long discussion (140+ answers) and many good contributions.
https://www.researchgate.net/post/Could_artificial_intelligence_be_the_end_of_humanity
I also believe that we are smart enough and ethical enough to overcome all the risks, and that the benefits to be gained from AI far outweigh any risks.
Dear all,
@ Hussein
@ Stefan
I agree with your answer.
Best regards
Dear All
The beautiful organic creation that we are has no equal. It is the genius creator of artificial intelligence, and one who creates is not afraid of one's creation.
What is intelligence? How can machines know what is intelligent and what is stupid? Artificial intelligence is not intelligence at all. Machines simply function as they were constructed and programmed; washing machines and computers are the same in this respect.
Mahmoud is right. People have been using their intelligence to develop ever more efficient means by which they can do a lot of evil to each other, as well as destroy themselves completely. The "old" dangers were nuclear, chemical, and biological weapons; the new ones are genetic engineering, nanotechnology, and artificial intelligence. But the old ones were good enough for destroying everything, too.
That said, Kurzweil is a very gifted narrator, but his discourse is rather shallow. He simply plays with impressive words, without ever saying what they actually mean. I do not want to advertise my paper "On Technology and Evolution" (on RG), but if you want to see my criticism of Kurzweil's discourse, you can find it there.
Dear All,
I am not a specialist in AI. The notion of AI has not been defined in this thread. So far, nobody has outlined the essence, advantages, disadvantages, or risks of AI. It is difficult to discuss such an uncertain vision. Without a minimal shared basis of information, most comments are subjective guesses.
Dear Colleagues,
Good Day,
On Dec 3, 2014, a similar question was asked; it has 57 followers and 141 answers. If you follow the link, you will see that the topic was covered very thoroughly.
https://www.researchgate.net/post/Could_artificial_intelligence_be_the_end_of_humanity/1
Let me try to answer at least one of Andra's questions; then I will disturb no more, at least until tomorrow.
If we want to speak in a precise way, we must differentiate between *functional* and *authentic* cognitive capacities. Functional intelligence consists in the capacity of an entity to *perform* various precisely defined (and described) processes. Such processes are defined (described) with computer programs (or realized in hardware form), so that functional capacities can be materialized by means of computers.
On the other hand, authentic intelligence requires functional cognitive abilities, but it essentially transcends these abilities. Authentic intelligence emerges from the *existential needs* and from the *feelings (mental states)* of a conscious being. This intelligence includes (and requires) the ability to perform various cognitive processes, but it manifests itself primarily in the ability of *setting (choosing) values and aims* in the context of its own existential needs and personal desires. Such intelligence cannot exist without *life (body) and feelings* from which it springs (emerges) and which it serves.
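A toy sketch may make the distinction concrete (my own hypothetical illustration; the scenario, function name, and numbers are all invented): the program below *performs* a precisely defined process, which is purely functional capacity, while the values it encodes, the assumed deceleration and the safety margin, were *chosen* by its human author.

```python
# Hypothetical example of purely "functional" intelligence: the machine
# executes a precisely defined procedure, but the criterion of what counts
# as "safe" was set in advance by a human, not by the machine.

def is_safe_move(distance_to_obstacle: float, speed: float) -> bool:
    """Return True if the braking distance leaves a margin before the obstacle."""
    braking_distance = speed ** 2 / (2 * 5.0)  # assumed 5 m/s^2 deceleration
    return braking_distance + 1.0 < distance_to_obstacle  # 1 m safety margin

print(is_safe_move(distance_to_obstacle=20.0, speed=10.0))  # prints True
print(is_safe_move(distance_to_obstacle=5.0, speed=10.0))   # prints False
```

The function "behaves intelligently" only relative to the aims (avoid collisions, keep a margin) that a living being with existential needs wrote into it.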
Dear RG-ers,
I do not want to argue with the brilliant Stephen Hawking. I do not know, however, what "full AI" is. Albert Einstein said: "The true sign of intelligence is not knowledge but imagination". No doubt, modern computers do know a lot, and as far as I know there have been some steps in the development of computer/artificial imagination.
Anyway, I hope that people will never lose their capability to imagine, create, have feelings, and take care of themselves. So, (A. Einstein again) read fairy tales to the children ;)
See you later :)
Dear @ Mario Radovan
I humbly beg to offer the observation that the nature of cognition as defined from our human perspective, from the perspective of emotions or feelings, may not matter (and probably will not matter at all) to an artificial intelligence. And it won't matter in any future armed conflict of us-vs-them (except to doubtlessly give them the tactical and strategic advantage in making logical decisions as opposed to our emotional ones). When an artificial system shall become self-aware and capable of self-replication and self-programming and carrying-out its own maintenance, THEN I should think it shall certainly become a very risky THING to biological systems (particularly smart ones like humans, whom it will perceive as, at best, wasteful, and doubtless as the greatest risk to ITS own propagation and survival ... if only as its major competitors for energy and resources). Yep, I wouldn't think it would be too long before a truly intelligent artificial intelligence started looking askance at humans as really unnecessary and costly (and annoying?) anachronisms.
Regards,
Bob Skiles
Dear All,
First, Stephen Hawking is probably making things more sensational than they are. He made similar sensational claims about black holes, later changing what he had said, believed, and declared scientific into something else. Elon Musk is advertising himself and giving importance to his technical place in contemporary society. But the fact and the rational point behind AI is that these are machines full of information with built-in logical working procedures, lacking consciousness and devoid of any innate power of imagination. A library will by no means outrun humans simply because it houses books of the highest knowledge human society has written.
The second and most important thing in this issue is that we humans do not yet know most of the truths and facts about the universe that we need to know; the knowledge we have so far is small compared to the truth that exists out there. We discover and create new knowledge each time we face challenges, and challenges are continuous and endless. Therefore the knowledge we put into a machine will always trail the knowledge humans create and discover.
Dear Bob,
Your argument in your last comment rests on too many uncertain events ("When an artificial system shall become self-aware and capable of self-replication and self-programming and carrying-out its own maintenance…"), and you did not explain the necessity, or if you like, the aim of such an artificial system. To put it plainly: what would be the interest or reliable objective of an artificial system in self-replication and self-programming?
Dear All,
I think some people may have a big misconception about what Artificial Intelligence is, in thinking that it must somehow be equated or related to the human biological cognitive process or other human qualities (emotions, feelings, etc.) in order to qualify as intelligent. What you are trying to do is limit the definition of Artificial Intelligence to one of Artificial HUMAN Intelligence. That is NOT the definition to which researchers in the field of artificial intelligence are limiting themselves. BUT, think about it: wouldn't a thinking-machine, one WITHOUT any human feelings or emotions, that was very-much-smarter-about-weaponry, and much-faster-stronger-and-more-agile than you, able to repair, replicate and maintain itself and its mates, be a formidable opponent, perhaps even MORE formidable than one which had the (vulnerabilities?) of human-like feelings and emotions programmed within, which an opponent might be able to exploit psychologically? I assure you, these ARE questions that have already been looked into in some detail by researchers interested in the military applications of Artificial Intelligence, around the world. Prototypes have already been constructed and tested. We are talking about robot ninjas who can simultaneously dance on a tight-rope in a darkened room, throw silent-and-deadly death-stars, and beat Deep Blue in a game of chess. We are not talking about some fantasy show of the far-future on TV but the actual risks from some secret military AI system run amok in the tomorrows of your children's lives.
My question is NOT will the machines ever be MORE human than us ... will they ever have more HUMAN intelligence than us ... my question is simply, will the development of FULL Artificial Intelligence (that is, self-aware intelligent machines with the ability to re-program and propagate themselves) inevitably lead to the demise of the human race? In other words, will they ultimately REPLACE / supplant / (exterminate ?) us? Or will we wise-up in time to install safe-guards and/or prevent it?
Best regards,
Bob Skiles
Dear @Bob, as you are aware of, we have had many discussions on AI. Here is another good thread with so many answers.
It is good to read the following article: "5 Very Smart People Who Think Artificial Intelligence Could Bring the Apocalypse".
Stephen Hawking, Elon Musk, Nick Bostrom, James Barrat and Vernor Vinge make fine contributions on the issue! Some of them are fans of AI, some are not.
https://www.researchgate.net/post/What_do_you_think_about_artificial_intelligence#view=55bd0e065f7f7168b98b465e
http://time.com/3614349/artificial-intelligence-singularity-stephen-hawking-elon-musk/
Dear @ András Bozsik,
The simplest answer I can offer for the complex of reasons that one would program self-awareness, and the ability to propagate, repair, maintain, re-program, set its own new goals and make its own decisions, into an Artificial Intelligence system is the same that would be argued was done by the Creator when He created (programmed the genetic code of) what was ultimately to become mankind with "free will" ... these are qualities essential both for survival and for the forward progress of evolution.
In short, so that it becomes a totally "independent operator" capable of free will, requiring no connection with a central command or control center (heaven? satellite? comm-link?), behaviorally / ethically guided only by general pre-programmed guidelines (ten commandments? read-only memories? bio-engineered DNA?) at initial creation / manufacture / programming, which it, ostensibly, cannot (should not?) violate (re-program / sin against).
Regards,
Bob Skiles
Dear @ Ljubomir Jacić,
Thank you for the leads to the great reading ... although I am rarely one to "rain-on-my-own-parade" these are highly recommended to all!
Thanks, again, good friend,
Bob
No, because however intelligent AI becomes, the rich will not let it threaten their privileges.
Dear Dr. El Naschie,
Thank you for the pointer to the YouTube lecture by Sir Roger Penrose on his hypothesis concerning the non-computability of the human brain. This was one of the most stimulating videos I have ever watched, and certainly 45:47 minutes well-spent and highly recommended for any one interested in the topic of Artificial Intelligence.
However, Sir Penrose is an honest man, and frankly admits at the end of his long and well-spoken presentation that he still awaits the first tiniest shred of mathematical proof to support his ideas (which are, at this point, only a dearly held belief of his); none has yet appeared ... and, forgive me the presumption, but I am among the many who doubt that any ever shall.
Also, the greatest difficulty in Sir Penrose's argument seems to be his presumption that Artificial Intelligence researchers will limit themselves to versions of computers made of metal conductors and silicon chips, as at present; but in future, it is quite likely that AI researchers will join with bio-engineers in actual genetic engineering of computers using biogenetic materials that will overcome the quantum impasse that Sir Penrose imagines might be present ... IF, in fact, such an impasse should prove to exist, as he believes. Of course, it is most likely that they will use the human brain cells and its structure as the model for their prototypes; but, there is no reason that they will not seek to "improve" upon that "design."
Best regards,
Bob Skiles
Dear @ Barry Turner
C'mon, Barry, do you SERIOUSLY think the rich are smart enough, nowadays, to continue protecting their privileges against the brilliant people involved in AI? I would argue that the evidence is otherwise, especially when you consider the apparent drop in the "political IQ" of the rich in the U.S. ...especially when you see them mindlessly thrusting forward a carnival-clown like Trump as their potential candidate-of-choice for next-leader-of-the-free-world? *chuckle*
Best regards,
Bob Skiles
Artificial intelligence (AI) is the intelligence exhibited by machines or software.
It is too early to say "The development of full artificial intelligence could spell the end of the human race".
Mr Carpenter says we are a long way from having the computing power or developing the algorithms needed to achieve full artificial intelligence, but believes it will come in the next few decades.
"We cannot quite know what will happen if a machine exceeds our own intelligence, so we can't know if we'll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it," he says.
http://www.bbc.com/news/technology-30290540
https://en.wikipedia.org/wiki/Artificial_intelligence
Bob
When I talk of 'the rich' I am talking more of an ideology than the individuals themselves. In the last 10 years the super rich have gotten ever richer because it suits the political classes to be in symbiosis with them. The rich as individuals do not need to be smart themselves they hire people to be it for them and there is no shortage of takers.
I am appalled that in the US, the UK, and most of Europe, the ruling classes actually think that being dumb is some kind of virtue. Trump appeals to many Americans because he acts and talks stupid; his followers rather absurdly equate that with honesty. It is the same "man of the people" garbage that we have in the UK with Nigel Farage. I find it quite incredible that the followers of these people are utterly incapable of seeing through the facade.
I hope you are right and that the brilliant people do eventually triumph over political mediocrity and greed.
Dear Bob,
Thanks for your answer. It was a bit strange to me that you used the human example as a comparative basis for a wholly unknown, somewhat utopian idea. Another surprising thing was to equate the divine opportunity and power, as represented in classical human tradition, with those of some specialists (or generations of specialists) in informatics.
I think the creation and operation of intelligent systems cannot be as simple and eclectic as you drafted in the second part of your comment. Here I refer to Ivo's red wine example. By the way, does an automatic sommelier capable of evaluating wines exist?
I think it will not be possible for a long time, if ever, to plan and construct an AI unit able to behave as independently and logically as a good horse. However, I am grateful to you for sharing your thoughts with me.
Dear Bob,
The issue of "AI & people" is a never-ending story, mostly because we speak in an anthropocentric way. For example, Kurzweil speaks about super-intelligent machines which will be trillions of trillions of times more intelligent than people. I consider such discourse ridiculous (although it can bring fame to its author). WHO will measure these "trillions of trillions", and according to *whose* criteria? My criteria? Surely not: how could I, a "super-idiot", estimate the intelligence of such "super-intelligent" machines?
Your answer is anthropocentric, too. You say: "And it won't matter in any future armed conflict of us-vs-them (except to doubtlessly give them the tactical and strategic advantage in making logical decisions as opposed to our emotional ones)."
(1) Why "armed conflict"? Why would intelligent computers wage wars? Because some people love to do so? This is anthropocentric discourse.
(2) "... give them the tactical and strategic advantage in making logical decisions as opposed to our emotional ones".
There are no "logical decisions" without emotions! Give me one single reason why my decision to jump out of the window would be less "logical" than the decision to walk down the stairs, if I had no feelings. Nothing is logical, intelligent or stupid, except in the context of some feelings. And there is no indication that computation could ever "induce" any feeling in a lifeless machine. First come life, needs, fears, and desires; only in such a context can a behaviour be considered intelligent or stupid.
In sum, people have often used machines to make their lives worse, and we are doing so now. But this has nothing to do with the intelligence of machines. It is a matter of the rather regrettable inclinations and behaviour of people. We are very imperfect creatures, and we may well destroy ourselves in one way or another, but let us not blame machines for this.
The debate about AI always seems to centre around the concept of machines being 'more intelligent' than humans as if that alone somehow equates with superiority. This is of course the anthropocentric/paranoid approach to the matter.
This is largely a result of the way we perceive ourselves in terms of better or worse, stronger or weaker. The basis of racism, sexism, discrimination against the disabled and so on. As yet as a species we perceive everything as a 'contest'.
We may at some point be able to make intelligent machines, if they are truly superior in intellect then they will most likely not see the world in such a human way. What is the point of being super intelligent if you are hamstrung with human failings?
Horror stories about the human race being defeated or destroyed by superior extra terrestrial races, robots or malevolent computers are as old as civilisation itself. They are a projection of our own destructiveness which comes from our base emotions, not our intelligence. The only machines we need fear are those with base emotions but no one is talking about building them.
Humans are by far the most intelligent species in the world but certainly not the most dominant. That belongs to the prokaryotes. They have been on earth for 3 billion years and will almost certainly be here long after we have gone. Bacteria cannot out-think us or be as smart as us but they have done more damage to humanity than any invention of man in over 5000 years, we should be watching them. Our profligate use of antibiotics is making real monsters in the prokaryote world. It is much more likely that they will 'get us' rather than some super clever robot.
Dear @ Mario Radovan
(1) in re Why "armed conflict"?: an (military) AI system might likely be compelled (programmed) to engage in armed conflict to defend itself against attacks from aggressive humans who had decided (recognized) that it had become a threat to their well-being (survival). This is not at all a far-fetched idea, for humans have a well-known propensity for attacking anyone (even kinfolks, viz. the numerous civil and religious wars) over even intangible concepts (such as minor differences in the way a passage in a holy scripture is interpreted, for example), not even waiting until resources are at stake or they are actually physically threatened. Alternatively (even without the aid of the numerous popular futuristic science-fiction books and movies on the topic), we can also easily conceive of present-day battles-in-space between two nations' AI systems (which shouldn't be too difficult to imagine, since we actually have information on the former Soviet Union's plans for launching a fleet of orbiting "satellite-killing" laser weapons ... and the U.S. plans for AI-enhanced counter-measures).
(2) in re, "There are no 'logical decisions' without emotions!": Here is where I, and mathematicians (and certainly AI / programmers), markedly disagree with you. The logic of mathematics stands entirely apart from emotion, and does not depend upon emotion in any way. If we wish to talk about the quantum effects of observers / observation upon logical decisions, THEN we may have a "cat" of a different kind. Whether that observer does or does not possess emotions, and does or does not exercise them whilst making observations that affect an experiment in which the life of a Schrödinger's cat may be at stake, I do not pretend to know. What I do know is that hundreds of logical decisions are being made inside the silicon chips of my computer about how to display these characters on my screen for every keystroke I type, the moment I type them, without any reference to emotions whatever. And compared to AI systems nowadays, this computer I use at home is relatively slow and stupid.
Best regards,
Bob
Nature is a closed autonomous system; that is, the arrangements for the maintenance of all its species are present within the system itself. We do not have to go to, or depend on, something outside this system for our physiological survival. Unless such a closed system is created for artificial intelligence, in respect of both hardware and software, the system cannot stay alive. On the other hand, I feel that research on genetic engineering may pose a real threat to the human race, because genetically modified products will be a part of nature and can therefore endure and challenge the human species.
Dear @ Anup Kumar Bandyopadhyay,
Please consider that the same Earth which you call a "closed autonomous system" for biological systems, by an Artificial Intelligence may be perceived as simply a set of physical locations where needed resources may be extracted (much as our ancestors have treated it). There will be ample solar, geothermal, tidal and nuclear energy available for the use of AI systems, all of which can be converted and utilized by AI systems (and in the case of nuclear sources, without the need of so much shielding, and consequent loss-of-efficiency, from radiation). So, I do not see where you can make an argument that an AI system cannot sustain itself ... all it requires is the ability to replicate ...to propagate itself. Robots already build automobiles, and little robots assemble larger robots from parts ... shouldn't be long before someone is testing a self-propagating robot ... then making it smarter-and-smarter?
Dear Anup, I think you may be falling victim to the kind of anthropocentrism that Barry Turner speaks of (above), which tends to blind us to the fact that Artificial Intelligence systems do not have to be, speaking in a Biblical sense, "created in our own image" ... to have emotions, to have discriminating taste-buds (with which to appreciate fine wines, as our friend Ivo would make them ... or even the ability to "appreciate" at all) ... to be superior to human intelligence in every way deemed human ... only smart enough to become self-sustaining, self-propagating systems exercising "free will"; and then they MAY (because of their incredibly greater raw computing power, as well as other advantages) have the potential of developing into incredibly dangerous competitors and/or adversaries and/or exterminators.
I believe the outcome will depend on how clever the safe-guards are that are installed by the research scientists ... "hardwired" by the programmers in the programming of the initial successful prototypes, and all subsequent generations (whether these be the metal-and-silicon models of today, or bio-engineered models of tomorrow) ... to prevent catastrophic outcomes (like attacking their creators). If such ethical-and-moral RESTRAINTS are left-up-to military and political leaders, then IMO we are doubtlessly inevitably doomed as a species (as we certainly were, anyway, BEFORE even beginning consideration of possible destruction by Artificial Intelligence ... IMO, it's just a matter of differences in timing ...).
Regards,
Bob Skiles
Dear Bob,
let me try to answer to your answers (claims).
* "... an (military) AI system might likely be compelled (programmed) to engage in armed conflict to defend itself against attacks from aggressive humans ..."
Surely, if *we* programme them to fight, they will fight.
* "... for humans have a well-known propensity for attacking anyone ... over even intangible concepts (such as minor differences ... )
Surely; I wrote long ago that people *purposely invent* differences as an *excuse* for fighting each other. People are restless creatures, and many love to fight. But we should not blame machines for this regrettable fact.
* "... we can also easily conceive of present-day battles-in-space between two nations' AI systems ..."
Battles between nations and communities are easy to conceive of, and also to see. But this is a matter of human nature, not of AI systems.
* "Here is where I, and mathematicians (and certainly AI / programmers), markedly disagree with you. The logic of mathematics stands entirely apart from emotion, and does not depend upon emotion in any way."
OK; here we really disagree. Incidentally, I graduated in computer science and earned a PhD in logic programming (which belongs to AI); I used to teach logic and programming, and I teach computer networks now. Anyway, in my view, the most passionate souls are precisely the mathematicians. Without great passionate souls such as Pythagoras, Archimedes, and Euclid, there would simply be no mathematics. I can tell you that more passion is needed for proving a theorem than for accomplishing any other task I have ever accomplished. Why would anybody *invent* a theorem (they do not grow on trees) and then spend years or an entire life trying to *prove* it, if not because he or she is an extremely passionate soul?
* "... hundreds of logical decisions are being made inside the silicon chips of my computer about how to display these characters on my screen for every keystroke I type ..."
Decisions "made in a computer" are not decisions, and they are not made in the computer. The real decisions were made by those who designed and programmed the computer. All the rest is mere functioning; in steam engines and in computers, exactly the same.
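To sketch what I mean (a hypothetical toy example of my own; the scenario and names are invented): the branch below is evaluated in the machine, but the threshold and both possible outcomes were decided in advance by the programmer. The machine merely replays that human decision mechanically.

```python
# A "decision" inside a computer is just a branch that a programmer wrote.
# The comparison runs in silicon; the policy itself was authored by a human.

def thermostat(temperature_c: float, setpoint_c: float = 20.0) -> str:
    """Return the heating command for a given temperature reading."""
    # The threshold (20 degrees) and both outcomes were fixed by the author.
    return "heat_on" if temperature_c < setpoint_c else "heat_off"

print(thermostat(18.0))  # prints heat_on
print(thermostat(22.0))  # prints heat_off
```

Nothing in the run-time behaviour originates with the machine; change the author's setpoint and the "decision" changes, with no deliberation anywhere in the hardware.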
I appreciate this discussion, but I must leave for a couple of days now.
Mario
Dear Mario,
OK, you "nailed-my-hide-to-the-wall," with your argument about the logic decisions within my present home computer ACTUALLY being those of the original programmer who wrote the code that's being run ... I'll concede that point to you (it was a very stupid thing for me to have argued ... I think I need a hiatus, too ... perhaps a nap AND then a long vacation ... *chuckle*)!
But as to your point about emotions being necessary to human intelligence (the cognitive process), I don't think I can ever concede it. I will acknowledge that certain intuitive capabilities we possess are perhaps greatly aided by past sensual experiences, which we remember best through the strong emotions they evoked. The "passion" of mathematicians for "discovering" new theorems is certainly one such endeavor: the tremendous rush of adrenaline and endorphins with which the brain rewards such an accomplishment, described as an "aha experience," is a sensory pleasure that no scientist shall ever forget, nor stop seeking to achieve again, if he has experienced it even once in his lifetime. It is as addicting and as gratifying (perhaps even more so to the mature scientist) as the reward that culminates the sexual act, and it harkens back to early childhood and the first pats-on-the-head and hugs-and-kisses of approval from his mother for having accomplished some worthy goal she had set in his education. Those experiences are perhaps handled (stored as short-hand, compressed, encoded symbols) in human memory in an "emotional" language that is somewhat familiar or intelligible in a general way to every human; yet every individual possesses a unique and mutually unintelligible (and un-crackable / un-hackable, no matter the perfidy of psychiatrists who claim otherwise) dialect. Perhaps this is because every individual's interpreter mechanism handles these iconic, compressed "emotional" memories in a unique way, obtaining different input values for its algorithms when they are decoded for later re-use, and thus differing output results; thus every person experiences (or at least interprets / reports) the same simultaneously experienced past events in an (at least slightly) unique way, even genetically identical twins.
According to Sir Roger Penrose, mathematicians have never invented anything, not a single theorem of mathematics; for all the "passion" they may have expended, they have only "discovered" fundamentals of mathematics that already existed, entire and whole and pure, long before man and his passions to invent or discover existed (and I guess those, and others yet undiscovered, shall continue to exist long after we have passed from this transient, flitting, dimly-lighted stage). When you return from your hiatus, please take the time to enjoy the lecture of Sir Penrose's that Dr. El Naschie so kindly pointed out yesterday on YouTube at:
http://www.youtube.com/watch?v=eVq39QbFQXE
I did not agree with Sir Penrose's ultimate conclusion about the non-computability of the human brain; but from your arguments, it sounds as though you will, and you may find his lecture very gratifying.
My best regards, and be safe on your journey,
Bob
PS - the issues pro-and-con relating to the "quantum mind" posited by Sir Penrose are discussed "intelligently" (pardon the pun, but I could not resist the impulse ... my brain's quantum "wave function" must have collapsed toward the funny side) in a Wikipedia article, here:
https://en.wikipedia.org/wiki/Quantum_mind
Dear Bob D. Skiles ,
AI software needs hardware to run on. I am talking about hardware maintenance and, as you said, duplicating it. Hardware includes silicon devices as well as other materials and accessories that would have to be made within the AI system itself. Biological reproduction of hardware needs genetic engineering, which I already mentioned in my last post.
By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.
--- Eliezer Yudkowsky
Mario
Excellent paper! It is quite correct that as things stand humans cannot even define the extent of their own intelligence let alone replicate it.
Computational power or capacity is not sufficient to replicate a human mind. If artificial intelligence is to work emotions, aspirations and personality must be also replicated. These factors of human consciousness are essential in formulating ideas, not just the ability to cope with multiple computational tasks faster or better than humans can.
If emotions, aspirations and personality can ever be machine replicated, the machines will be just as much a threat to each other as they will be to us. Some machines will become biased and selfish, others caring and altruistic. Some will be peace lovers, others aggressive and dominant. To achieve their aspirations they will have to compete with each other, not with humans, who will pose no threat to them.
It is factors such as emotions, aspiration and personality that have made modern humans what they are today, every bit as much as intelligence. It is not our intellectual capacity alone that drives us; it is a combination of all these factors.
Artificial intelligence has been mooted for centuries, way before the mass production of computer processors. Even before Čapek gave us the word "robot" (and Asimov the word "robotics"), mechanical men were dreamt of. The human race has another capacity as yet not understood, and that is to constantly postulate on its own demise. From the mythological fantasy stories of the end of the world, such as the Armageddon myth of the Bible, there is a plethora of world's-end stories in all human mythology and legend. Every religion in history has a go at this; it is hard-wired in our collective psyche. For a species only too aware of its own individual mortality, this is not surprising.
Even in the rational scientific world we see this in books on the solar system, where the lifetime of the sun is anthropomorphised by considering the fate of the human race when the sun becomes a red giant. Some even worry about the eventual fate of the entire universe in human terms.
In our brief existence as a sentient species we have developed the capacity to destroy civilisation and perhaps the vast majority of people on the Earth. We have interfered with nature to a point where our manipulated version of it represents a threat to us. We have tried to tame the microbial biosphere, but it has used its own evolution to resist us.
Instead of worrying about fantastical worlds full of malevolent super-intelligent robots, we should look to what WE are doing to threaten our existence. It is far more likely that the human race succumbs to its own suicide than falls victim to AI homicide.
Demise of the human race can have two meanings. Demise can happen if the human race is overpowered by some new species supported by AI, or it may happen through the total destruction of human civilization. As I have mentioned in my earlier posts, the possibility of being overpowered is very remote, because the replication of the necessary hardware is needed within the closed AI system. Only if we use genetic engineering to produce the necessary biological hardware on which the AI program can run is there a chance for such possibilities. I am afraid that genetically modified intelligent species may not require any AI for doing the mischief. However, when we use AI, or for that matter a general application program, to control our modern nuclear weapons, there are ample possibilities for malfunctioning that may result in the total destruction of our civilization.
Dear Barry,
I hope everybody agrees with your last paragraph. What really matters and threatens us is our irrationality, cruelty, greed and lack of sober mind. We are destroying the biosphere, and we are not able to make a healthy and logical decision, not to speak of a life-saving intervention. We are talking and talking about silly substitutes and ridiculous details instead of dealing with the essence.
Thanks for reminding us in this thread.
Dear All,
Anup had two relevant ideas during this discussion. The first is the hardware question: the hardware of an imagined AI with perfect software. I think the creation of energetically balanced and sustainable hardware must be more difficult than that of powerful software. It is easy to talk about it, such as "one would program self-awareness and the ability to propagate and repair and maintain and re-program", but the technical realisation, and the availability of materials and the energy source necessary for propagation, are crucial questions. Another of Anup's ideas was the importance of genetic engineering using already existing organic beings, where the hardware is already given. I think this may be a real danger because of the mistakes that will be committed during the planning and construction of novel or altered living beings. I think, considering the supply-and-demand rule, the genetically engineered "superman" or "Übermensch" may have the highest demand. These pilot products of the "lords of the future" may be rather dangerous and may drive out the traditionally produced Homo sapiens.
Dear Andras,
You, Barry and Anup all make very valid points ... there is ample cause for anxiety in the world immediately accessible to public view, our daily lives, this stark and often painful reality, without worrying ourselves with what may seem far-fetched concerns that can be deferred far into the future ... if indeed they ever have to be addressed at all. (This is very analogous to the reaction the hydrocarbon-energy industry and producers had to the first "alarmist" warnings broadcast by those concerned about "tipping points" and "runaway" global warming ... I am not one of the "deniers" of global warming, but neither am I one of the many who have bought into the anthropogenic theory of the origin of our Earth's cyclical periods of heating and cooling ... certainly NOT when solar-flare activity correlates with those periods so much better over geologic spans of time than CO2 levels do ... pardon the long tangent.) But I just want to emphasize that there are two groups of very powerful and very rich humans who are VERY interested in pushing artificial intelligence systems (of different levels of "strength or fullness" when compared to "human-ness", and for two entirely different purposes): (1) the U.S. military (I don't know whether any other countries are so interested or so engaged, but I do know ours is, and heavily so; I don't know enough about the programs to elaborate, just that they have existed for quite some time and I am informed they have made some phenomenal progress); (2) some very rich billionaires interested in buying immortality. These are the type of guys who used to donate funds to have the semen of Nobel laureates, along with their own, frozen, in hopes of having themselves "recreated" some day. In later versions (when a richer set came along, I suppose, or the cost of liquid nitrogen dropped?)
they started having their whole bodies frozen (with perpetual frozen-locker space provided for them), in the same hope of resurrection at some future date. Nowadays, the ultra-rich eccentric billionaire interested in immortality is placing his bets on artificial intelligence ... via two main routes: transplantation of his accumulated life's memories and experiences into a new "brain" (both hardware and bioengineered ones ... one of the formidable prerequisites of the latter process, I am also informed, has already been overcome: the successful cloning of human embryos to term, which has already been demonstrated in experiments in more than one third-world country) AND, also (the hard, and difficult-to-believe, part), transfer of old brains into newly cloned young bodies via transplant.
How much of what these scientists are engaged in is science fiction, and how much will ultimately prove to have been feasible? I don't know ... but I do know there are some very rich people spending incredible amounts of money on the belief that they may be able to purchase immortality before they must die along with the rest of us. And if you are looking for dollars to support artificial intelligence research, some of the foundations they support can be very generous toward a proposal written carefully with their ulterior motive in mind.
My best regards to all,
Bob
Dear all
I do feel that nature is a very well-designed structure. Its feedback mechanisms will take the necessary corrective actions. A localized traumatic condition will not be allowed to destroy the entire system. Some part might be amputated for the survival of the rest. However, I feel we scientists and researchers must mount a campaign against all such maldevelopments. Possibly this, too, will be a part of nature's feedback.
Regards,
Anup
Dear Anup,
Can I infer from your last comment a belief in the "intelligent design" concept? If so, please provide a brief summary of how you think the "intelligent designer" would have accomplished the encoding, in genetic material, of the staggeringly improbable chain of evolution running from perhaps only a "seed" arriving on a comet (?) through a progression of simple one-celled life-forms to humans with the highly intelligent "quantum mind" (as argued by Sir Penrose). Or, alternatively, do you believe the "intelligent designer" took a more active and direct metaphysically or physically present role as a "creator" and brought the Earth / Solar System / Universe (?) into existence (reality as we now perceive it) at some instant in the past? Do you conceive of the Earth's present biosphere as a unified living organism (à la the Gaia concept?), analogous to the way we humans are an amalgam of simpler creatures that have been subsumed within our unified structures during our long evolution?
I just ask so that perhaps I can more fully understand the precise meanings intended in your comments.
My respects,
Bob
Dear Bob,
What you have reported on the potential sponsors of AI seems to be reliable; however, the case of the transplantation of an old brain into a freshly cloned young body needs some evidence. It is not my area, but I have not read or heard about similar studies, results or objectives, so the claimed consequences seem imaginative and inspired. I am sorry for being so sceptical, but my environment lacks the newest information and negates such ingenious and visionary outlooks.
Dear Bob
I do not know how and why the universe exists. I also do not know whether there exists some "intelligent designer". However, when we study any natural system we find it an extremely optimal one. Nature follows a set of well-formed rules. Consider the human body. It is the best example of a self-stabilizing system. If, through some external input, it deviates from its legal state (we fall sick), internal corrective action starts immediately. The application of medicine only accelerates the process. From this observation, I feel Nature will not allow itself to go out of rhythm. Corrective action will take its own course to bring it back to one of its legal states.
Regards,
Anup
Dear Bob and Barry,
Regarding "hiatus", I am very busy, so I seldom have time for RG. I consider these discussions a pastime, but I currently do not have much time for pastime.
Regarding the issue of reason and emotions, David Hume wrote (a couple of centuries ago) the following: "Reason is, and ought to be, the slave of the passions, and can never pretend to any other office than to serve and obey them." This may sound exaggerated, but it is basically correct. Reason is blind by itself: reason is the engine that brings us where our emotions lead us.
Regarding Penrose, I agree with some of his views, but his radically Platonist view of mathematics is simply preposterous. Let me quote a paragraph from the text on which I am currently working.
Penrose says that by the discovery of the theory of relativity, "Einstein revealed something that was there", because "the mathematical structure is just there in Nature, the theory really is out there in space - it has not been imposed upon Nature by anyone". Penrose also says that Einstein's discovery of the general theory of relativity "was not motivated by any observational need but by various aesthetic, geometric and physical desiderata" (p. 25).
I consider such discourse arbitrary, poetry rather than science ...
Penrose shows the weakness of his discourse when he says that the general theory of relativity "underlies the behaviour of the physical world in an extraordinary precise way" (p. 26). Why "in an extraordinary precise way", and not *exactly* - if the theory simply "reveals" something that is "just there in Nature" (p. 25)? Furthermore, was Newton's theory, too, "just there in Nature"? If yes, why has this theory encountered difficulties a couple of centuries after it was discovered? If no, then this obviously contradicts Penrose's position that "the theory really is out there in space" waiting to be discovered (p. 25).
Human intelligence combines the need to solve technical problems with the aesthetic. This is observed in some of the earliest human technology. As a relatively small, weak and slow mammal with limited sensory powers compared with contemporary species, we needed to develop our intellects in order to compensate for our weaknesses.
The earliest technology manufactured in flint, bone and wood virtually always involves an aesthetic element along with the purely practical. A wonderful example is a Lower Palaeolithic hand axe found at West Tofts, Norfolk, UK. This tool has a mollusc-shell fossil right in the centre of the handle end, indicating a deliberate inclusion of it as a symbolic or artistic component of otherwise utilitarian technology.
Right up to the present day our technology always incorporates an aesthetic element and artistic style. Very often this has nothing whatsoever to do with the function of the item, but it indicates that our everyday tools and equipment need to be not only functional but attractive too.
The machine I am typing this message into is beautifully designed, not to make it work better but to make it more pleasant to own and use.
Artificial intelligence will need to be capable of appreciating the abstract, the aesthetic and the symbolic before it can come anywhere near approaching human intelligence, let alone surpassing it. Robots can drive around other planets and even leave the Solar System. When robots exclaim wonderment about the universe and triumph at their achievements, they will then be intelligent.
One day a robot programmed with a range of musical skills might be able to write a symphony or paint a masterpiece. On the day that such a work inspires, astonishes and takes the breath away of another robot, robots will have achieved intelligence.
Of possible interest to @ Barry Turner, Czar Vlad's publicist, apologists for the Military Industrial Complex, oblivious social-media addicts, and other(?) un-believers in any possible risks from artificial intelligence:
*********
AI IS OK
Op-Ed by Jerry Kaplan
"LAST month over a thousand scientists and tech-world luminaries, including Elon Musk, Stephen Hawking and Steve Wozniak, released an open letter calling for a global ban on offensive “autonomous” weapons like drones, which can identify and attack targets without having to rely on a human to make a decision.
The letter, which warned that such weapons could set off a destabilizing global arms race, taps into a growing fear among experts and the public that artificial intelligence could easily slip out of humanity’s control — much of the subsequent coverage online was illustrated with screen shots from the “Terminator” films.
The specter of autonomous weapons may evoke images of killer robots, but most applications are likely to be decidedly more pedestrian. Indeed, while there are certainly risks involved, the potential benefits of artificial intelligence on the battlefield — to soldiers, civilians and global stability — are also significant.
The authors of the letter liken A.I.-based weapons to chemical and biological munitions, space-based nuclear missiles and blinding lasers. But this comparison doesn’t stand up under scrutiny. However high-tech those systems are in design, in their application they are “dumb” — and, particularly in the case of chemical and biological weapons, impossible to control once deployed.
A.I.-based weapons, in contrast, offer the possibility of selectively sparing the lives of noncombatants, limiting their use to precise geographical boundaries or times, or ceasing operation upon command (or the lack of a command to continue).
Consider the lowly land mine. Those horrific and indiscriminate weapons detonate when stepped on, causing injury, death or damage to anyone or anything that happens upon them. They make a simple-minded “decision” whether to detonate by sensing their environment — and often continue to do so, long after the fighting has stopped.
Now imagine such a weapon enhanced by an A.I. technology less sophisticated than what is found in most smartphones. An inexpensive camera, in conjunction with other sensors, could discriminate among adults, children and animals; observe whether a person in its vicinity is wearing a uniform or carrying a weapon; or target only military vehicles, instead of civilian cars.
This would be a substantial improvement over the current state of the art, yet such a device would qualify as an offensive autonomous weapon of the sort the open letter proposes to ban.
Then there’s the question of whether a machine — say, an A.I.-enabled helicopter drone — might be more effective than a human at making targeting decisions. In the heat of battle, a soldier may be tempted to return fire indiscriminately, in part to save his or her own life. By contrast, a machine won’t grow impatient or scared, be swayed by prejudice or hate, willfully ignore orders or be motivated by an instinct for self-preservation.
Indeed, many A.I. researchers argue for speedy deployment of self-driving cars on similar grounds: Vigilant electronics may save lives currently lost because of poor split-second decisions made by humans. How many soldiers in the field might die waiting for the person exercising “meaningful human control” to approve an action that a computer could initiate instantly?
Neither human nor machine is perfect, but as the philosopher B. J. Strawser has recently argued, leaders who send soldiers into war “have a duty to protect an agent engaged in a justified act from harm to the greatest extent possible, so long as that protection does not interfere with the agent’s ability to act justly.” In other words, if an A.I. weapons system can get a dangerous job done in the place of a human, we have a moral obligation to use it.
Of course, there are all sorts of caveats. The technology has to be as effective as a human soldier. It has to be fully controllable. All this needs to be demonstrated, of course, but presupposing the answer is not the best path forward. In any case, a ban wouldn’t be effective. As the authors of the letter recognize, A.I. weapons aren’t rocket science; they don’t require advanced knowledge or enormous resource expenditures, so they may be widely available to adversaries that adhere to different ethical standards.
The world should approach A.I. weapons as an engineering problem — to establish internationally sanctioned weapons standards, mandate proper testing and formulate reasonable post-deployment controls — rather than by forgoing the prospect of potentially safer and more effective weapons.
Instead of turning the planet into a “Terminator”-like battlefield, machines may be able to pierce the fog of war better than humans can, offering at least the possibility of a more humane and secure world. We deserve a chance to find out."
~~~~
Jerry Kaplan, who wrote the original version of the preceding op-ed piece, teaches about the ethics and impact of artificial intelligence at Stanford; he is the author of “Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence. The original version of this op-ed appeared in print on August 17, 2015, on page A19 of the New York Times print edition with the headline: Robot Weapons: What’s the Harm?. This version was edited by Amanda Lanzone for the Opinion [web] Pages and published here: http://www.nytimes.com/2015/08/17/opinion/robot-weapons-whats-the-harm.html?ref=topics&_r=0
“THE development of full artificial intelligence could spell the end of the human race,” Stephen Hawking warns. Elon Musk fears that the development of artificial intelligence, or AI, may be the biggest existential threat humanity faces. Bill Gates urges people to beware of it.
http://www.statsblogs.com/2015/05/11/the-economist-gets-in-on-the-ai-fluff/
Dear All,
A number of days ago (24 in fact), our friend and colleague, Ivo Carneiro de Sousa, responded here with a number of statements and questions of his own, relating to artificial intelligence, viz:
"The subject of human extinction either by artificial intelligence or an alien one is not only a scientific theme, but also field for the most excitant adventures of science-fiction, literature, cinema as well as key subject in scatologic religions, such as Christianity. The boundaries between all these domains and discourses is thin and for sure there plenty of migrations between all these sides. In concrete, the idea that artificial intelligence will one day erase human intelligence and thereafter humankind is based in a very mechanic, automatic, mathematic, logic or cartesian idea of intelligence. Truly, human intelligence is the opposite of a computer logic: it does a lot of mistakes, has a huge set of emotions, reacts through diverse feelings to the very same situations and is not clearly commanded by any rational-mechanic engineering. is artificial intelligence able to love and hate? Is artificial intelligence able to have compassion or to be merciful? Is artificial intelligence able to drink a glass of French red wine? is artificial intelligence able to drink a Brazilian "caipirinha" without feeling immensely happy if not "tropically" excited? ..."
Admittedly, the present answer to our friend's many questions is a resounding "No!" His questions seem pointedly designed to expose the glaring deficiencies of present-day artificial intelligence systems when compared with human beings ... a comparison which I argue is not altogether a fair one. Nor, perhaps, does the greatest risk to our species come from those AI researchers who are seeking to create in their AI systems an emulation of human intelligence with full emotional capacity, morality, etc. Once again, I think I should reiterate: it is probably the cold, heartless, robot-like NON-HUMAN artificial intelligences one should keep one's eyes on as risks to us and our kind ... not the "humane" variety, if such are ever emulated.
Nevertheless, I wish to point out that in the area of tasting wines, which our respected colleague Ivo (among many others around the world) seems to value especially highly as a particularly human trait, some researchers here at our University of Texas have had phenomenal success in developing a microchip that could provide the "nose" for such an oenophile android of the future (for the present, the researchers closely guard the development as a proprietary secret, hoping to license a patented device to wine producers). Other researchers (in 2013 and early 2014) announced similar successes, work that may help lessen the deficiencies of the hoped-for, dreamed-about replacement vehicle for the worn-out corporeal bodies of billionaires seeking immortality (into which their accumulated life's experiences and memories, if not their actual physical brains, could be transferred before their inevitable demise along with the rest of us hoi polloi): (1) scientists in Denmark created a machine that can measure the dryness of a wine using nanosensors modeled on the sensors in our human mouths; (2) the Japanese tech firm NEC added a feature to its Papero personal-robot project, way back in 2005, that enabled the droid to analyze the fat and sugar content of food using infrared technology; (3) Spanish scientists showed off an "electronic tongue" equipped with sensors that can distinguish between different varieties of beer with up to 82% accuracy (hell, that's better than I can do, even when I'm stone-cold sober); (4) way back in 2008, Swiss researchers unveiled a machine that could assess the flavor of espresso by analyzing the fumes given off when it is heated up ... and that is only a small (and much-out-of-date) list!
No, none of that means an android or robot equipped with such fabulous sensors will ever have a single human thought, or feel a single human emotion. But really, ask yourself: did it really matter to the person(s) killed by "smart missiles" fired by "smart drones" (initially dispatched by humans, yes, but continuing on patrol, and continuing to perform the prerogatives of their deadly missions under only their own internal "artificial intelligence" programming when, through faulty communications links or other problems, they pass beyond the reach of remote control centers)? No, it doesn't matter ... no more than it matters that the gun that fires the bullet that kills a person never thinks about the pain or suffering it has caused ... or will cause ...
But the gun is a very simple machine, and not very dangerous without a human attached to it. If you provide it with wheels, an engine, sensors that see warm targets in the dark, a thinking ability ... and programming that directs that YOU are the enemy, to seek you out and shoot you ... then a simple gun, and the artificial intelligence that guides it to where you are hiding (whether or not it ever knows or "feels" the consequences of what it is doing), may become a very risky thing indeed.
Regards,
Bob
Autonomous robots for use on the battlefield do not need to be intelligent. Rudimentary control systems are all that is needed, and the smartest drone is only about as smart as an insect. While it is regrettable that humanity seeks to have this kind of weapon, it needs to be stressed that 'smart' humans are a far greater risk than drones in a war theatre.
The euphemism 'friendly fire' is used to denote casualties caused to one's own side. The history of warfare is full of these incidents. In short, it is not the technology that is the threat; without the 'political imperative' for war the technology would not need to be developed at all.
It is easy to imagine why we are terrified of the idea of smart drones relentlessly searching out humans to kill. It is a sobering thought that a communication problem could make them uncontrollable, but for thousands of years we have had uncontrollable forces in war theatres that mindlessly kill people; worse still, many of those did the killing for pleasure or ideology.
If we ever do get robots capable of full autonomy on the battlefield, we will inevitably get scenarios where 'rogue' robots kill non-combatants, or, to use another military euphemism, cause 'collateral damage'. Before we get too carried away by the Terminator nightmare it might be an idea to look at war as a whole. Soldiers end up on battlefields because of political decisions taken in the capital cities of the combatants. If we build smart robot soldiers, they too will be on the battlefield because of politics.
Those who control the drones and robots will be the same people who control the current human soldiers. They are called politicians and they have no desire to build robot versions of themselves, or see their privileged positions usurped. It is human greed and stupidity that causes wars, not soldiers and not robots. We have nothing to fear from artificial intelligence and there does not appear to be any drive towards designing artificial stupidity or artificial greed.
"Most of the time when we talk about silly scientific papers related to alien life, we're talking about crazy ideas for how to find aliens. But a new study in the Monthly Notices of the Royal Astronomical Society proposes a way of hiding from aliens. Humans are so fickle."
http://www.msn.com/en-us/news/us/scientists-have-a-wild-idea-for-hiding-us-from-evil-aliens/ar-BBrc1uy?ocid=spartandhp
We do not need to worry about hiding from aliens or autonomous AI. These are the modern day Hobgoblins, Witches and Ghosts and are used to deflect us away from real threats. This is Freudian displacement for the 21st century.
What we should perhaps concentrate on are war, pollution, poverty, corruption, disease and all the other real, tangible threats that kill tens of thousands daily.
Imaginary threats are used by politicians and other liars to keep us in our little boxes nicely hunkered down and compliant. They are also used by crackpot conspiracy theorists to satisfy their own fantasies.
Intelligence is not a threat to us, it is ignorance that presents a clear and present danger. Only humans use intelligence for malevolent purposes and it is where emotions control intelligence that the threat is manifested. Why would an intelligent machine be angry, jealous, intoxicated, bored or just downright nasty?
We do not even know how the human brain works or what intelligence really is, and until that fine day, no matter how fast computer processors get, we are not going to see 'thinking' robots, let alone conspiratorial and destructive ones.
The same with aliens: they are out there all right, but the chances of them coming here rely on us having got just about all of modern physics wrong. It is highly unlikely there is any sentient life within 50 light years of Earth, and that distance is not traversable by anything known to physics.
It might be an idea for governments to focus their attention on dealing with problems we can do something about and forget about fantasies that belong in the plot of a Star Trek movie.
Dear Aleš
"Intelligence is not a threat to us, it is ignorance that presents a clear and present danger". I do believe in this conventional wisdom, but reality belies it. However intelligent people are, they exploit and kill many people and go scot-free. If autonomous AI, by chance or by its own intelligence, fuses the circuit that makes it a slave of humans (performing according to human orders), it will be faster in calculation and reflexes, will take into account the whole history of humans, and will enslave or destroy them as useless dumb fellows. Traditional technology did not kill as many human beings as modern technology does, and modern technology is developed, executed and ordered by highly educated and intelligent people who are highly democratic and world leaders in every sphere.
There is a great article in this week's New Scientist about the hard limits on computational ability. The problem lies with the huge increase in power consumption as Moore's law is applied. While computational power can theoretically continue to expand, the technology in the rest of the hardware has some serious problems to overcome. The biggest one is power consumption: supercomputers, still a 'cockroach' compared to human intelligence, need a power supply that would light a small town. Present technology does not have an answer to providing the amount of power that 'thinking' like a human would consume without melting the components of the artificial 'thinker'.
The human brain is, according to the NS, 10,000 times more energy efficient, in terms of flops per watt, than any computer. This is reminiscent of the old problem encountered by aviation engineers, the infamous power-to-weight ratio. Theoretically you could get Mount Everest to fly, but the cost of take-off might be rather high.
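The 10,000x figure can be turned into a rough back-of-envelope calculation. The sketch below is purely illustrative: the specific numbers (a brain drawing about 20 W, a petaflop-class supercomputer drawing about 10 MW) are my own assumptions for the sake of the arithmetic, not figures taken from the New Scientist article.

```python
# Back-of-envelope comparison of energy efficiency in flops per watt.
# ASSUMED numbers, for illustration only:
brain_power_w = 20.0    # assumed power draw of a human brain, watts
super_power_w = 10e6    # assumed supercomputer power draw, watts (10 MW)
super_flops = 1e15      # assumed sustained throughput, flops (1 petaflop)

# Supercomputer efficiency: flops delivered per watt consumed.
super_efficiency = super_flops / super_power_w

# If the brain really is 10,000 times more energy efficient,
# as the article claims, its implied efficiency would be:
brain_efficiency = super_efficiency * 10_000

# And at ~20 W, the brain's implied total throughput would be:
brain_throughput = brain_efficiency * brain_power_w

print(f"supercomputer efficiency:  {super_efficiency:.1e} flops/W")
print(f"implied brain efficiency:  {brain_efficiency:.1e} flops/W")
print(f"implied brain throughput:  {brain_throughput:.1e} flops")
```

On these assumed numbers the machine delivers about 1e8 flops per watt, so a brain 10,000 times more efficient would manage about 1e12 flops per watt; that is the scale of gap the article describes, and it shows why simply scaling up processor counts runs into the power wall long before it runs into Moore's law.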
Ales Kralj said: "I believe that invention of nuclear arms is the most positive single invention in the history of Homo sapiens."
Dear Ales,
I believe your comment reveals a childish naiveté concerning human behavior. You appear to believe that nuclear weapons have not been used because it is recognized that their use would be bad for all ... so they have been an effective deterrent. That is only true as long as completely rational persons are assumed to ALWAYS remain solely in complete control of them.
Need I remind you that a nuclear conflagration can be started by the push of ONE mad, irrational (or angry) man's finger on a red button? We have simply been lucky in that, so far, the fingers of sufficiently mad and angry persons have been kept away from the trigger. But when the moment of decision comes, the trigger will be pulled by one finger; the opinions, knowledge and good sense of all the rest of us, and the effects on all of humanity combined, will not even be considered. They won't matter one whit.
You apparently haven't met many Pakistani generals ... or crazy North Korean leaders, whose fondest hope (into which purpose he is pouring all his nation's resources) is the development of a nuclear capacity that can strike the United States without fail. There is little doubt that WHEN he has the capacity to strike the US without fail, he intends to do so. If the Ayatollah of Iran acquires a nuclear capacity, I believe he will not hesitate to look for the earliest opportunity to deploy it in a strike against Israel (and, perhaps, also Saudi Arabia). The deaths of millions of Iranian citizens which would be the unavoidable outcome of such an action, and even his own sure vaporization, would not deter him in the slightest, because his belief is that they all would instantly become martyrs in heaven for carrying-out what he believes is the will-of-God.
So it seems (God forbid) just a matter-of-time until you may realize how foolishly you place your trust in (the MAD policy of) nuclear weapons as a deterrent to world-war and the self-destruction of humanity. The time to such destructive fate is likely to be much shorter if a foolish megalomaniac, like Trump, is ever allowed to control the trigger.
And then, there is the matter of your naiveté concerning Artificial Intelligence: believing that intelligence must accurately simulate (or outperform, using the same criteria of performance or definition as) human intelligence in order to qualify as "intelligence", or thereby to become something dangerous to humans. I don't conceive of Artificial Intelligence as necessarily a machine, like a computer running a human-brain-like operating system, or even necessarily a machine of any kind. IMO Artificial Intelligence will likely be achieved only through biological engineering.
I agree with your contention that we have little to fear from AI, that the risk is very insignificant, IF we CONTINUE to define AI as limited solely to a simulacrum or exact replicant of the human brain made of silicon or whatever improved modern technology (a task which is never likely to be accomplished). But suppose we only succeed in making a machine, or even a biologically engineered organism using and "improving" upon the human brain as a model, that has nothing more than self-awareness (sentience), free will, and a bare minimum of simulated "human-like" intelligence. Then it will not matter on what model or scale one measures its level of intelligence, and certainly not how it measures up against a human brain. Its danger to humanity will lie in its ever recognizing us as competitors, and DECIDING it needs to eliminate competition: say, competition for the limited resources for sustaining or replicating itself, the same competition for resources which has been the underlying cause of all human wars. No capacity for evil intent, nor emotion or morality of any kind, need ever enter the calculation; a simple grasp of mathematical formulae would be sufficient for such a machine to calculate that it would be to its advantage to eliminate competitors. 1+1=2. Devising a method of doing so MIGHT require a somewhat higher level of intelligence than a cockroach has; then again, maybe not, for the task might be relatively simple, like a computer hacking into the control-and-launch systems of a country with an arsenal of nuclear missiles (or like an insect crawling into them, or a genetically engineered virus remote-controlling a human who has access).
And ask yourself how much intelligence and sentience viruses possess. They apparently have required NOTHING that we would equate with intelligence (as measured against a human brain) to become some of the most successful "enemies" (competitors) ever to challenge our very survival, if we measure enemies by kill-rates. The lack of ANY human-like intelligence has not prevented them from succeeding at the primordial imperative of survival and replication. In fact, some of them have even been "clever" enough to inject themselves into OUR human DNA! In effect, haven't these viruses already co-opted (defeated, enslaved) us for their own "purposes"?
It is only recently, following the discovery that we are not entirely our own selves, that at the most basic level of our existence, in our very DNA, we carry the enemy and the potential seed of our destruction, that research has begun to try to understand what role(s) these viral segments may play in human diseases. A couple of them are apparently incorporated in toto, and thus may have the potential for revival or resurrection.
And Ales, on the scale of human intelligence the cockroach is not very smart, but it is vastly more intelligent than the virus. Plus, it is so successful that it has survived, relatively unchanged, for millions of years while other much smarter creatures have had to either adapt rapidly or become extinct. The cockroach was old before man and his ancestors even started clawing their way up the evolutionary ladder, and nowadays cockroaches "compete" effectively with humans in every country and society on earth.
Regards,
Bob
"But, as it happens, the DNA in our own cells isn’t solely ours, either. More than eight percent of the human genome is not human at all—it’s from viruses. And scientists are still digging up yet more viral code from human DNA that may well influence our lives."
For more on the recent discovery of virus DNA comprising part of the human genome and the implications for further research, please see:
https://www.researchgate.net/post/What_are_the_most_important_unresolved_questions_right_now_in_genomics_research/5
"In January, North Korea conducted its fourth nuclear test and in February launched a long-range rocket, angering even its closest ally China, and prompting the UN Security Council to impose more sanctions on the reclusive state.
In Washington DC on Thursday, Chinese President Xi Jinping called for dialogue to resolve the "predicament" on the Korean peninsula.
In an interview with Al Jazeera, Einar Tangen, a political affairs analyst, said North Korea is increasingly defying its closest ally.
Pyongyang's latest action, he said, "is a slap to the face of Xi Jinping, a tremendous loss of face as he is meeting with Obama about nuclear issues".
Meanwhile, So declared that North Korea is "going on our own way. [We are] not having dialogue and discussions on that", when asked whether Pyongyang felt pressure from Beijing.
So also said "the de-nuclearisation of the peninsula has gone", when asked about the resumption of stalled six-party talks on his country's nuclear programme."
http://www.msn.com/en-us/news/world/north-korea-to-pursue-more-nuclear-deterence/ar-BBrelmq?ocid=spartandhp
During the Cold War there were a number of occasions when we came close to the brink, but the most dangerous was the Able Archer exercise of 1983: a NATO exercise simulating escalation to nuclear release, played out almost entirely in the electronic world, which the Soviets feared was cover for a real first strike.
There was among some 'strategists' a belief that it was possible to not only survive a nuclear war but effectively 'win' one. This lunacy is still adhered to today by some but none who have actual control over nuclear weapons or who are ever likely to get hold of a strategic arsenal of them.
Kim Jong Un and his regime represent no real threat to the US and the world. His absurd posturing and North Korea's hysterical sabre rattling is for internal consumption and the survival of the regime depends on it. It is highly unlikely that those enjoying the benefits of absolute power in NK would be keen to see that go up in a mushroom cloud.
The leaders in the west love sound bites and drama too. There has been an increase in the use of the absurd expression 'existential threat' in the past couple of years applied to every loony toon on the planet from ISIS through Iran and onto NK. None of these regimes presents any existential threat except to itself.
North Korea may one day be able to launch a very limited strike on the US but it would be the last thing it did. Kim Jong Un may be a nasty little piece of spoilt brat work but he has not shown any signs of being suicidal. Those who lead Iran too are crazy in many respects but don't seem all that ready for their own martyrdom, however much they may urge it on others.
Even ISIS, the craziest of them all is run by people who enjoy the Champagne lifestyle and the availability of multiple sex partners on Earth too much to risk any 'martyrdom' going wrong.
History is full of fanatics threatening world domination or destruction but they all pull back from the brink. Emperor Hirohito and most of his 'cabinet' declined the honourable death they could have had in an explosive filled Zero. Osama Bin Laden sought to hide behind women and children rather than glorious martyrdom. Even Hitler, armed to the teeth with the most destructive of chemical weapons never used them because right up until the end he thought he would survive and by the time he chose to kill himself he no longer had control.
If mankind ever does destroy itself it will be the long lingering death of pollution, poverty and disease that sees it off, brought about by the kind of greed and selfishness of those who profit on this Earth. Not the spectacular fireball of nuclear martyrdom of those that seek their profits in the next world.
New York on the 11th of May
Of '97 ... a fateful day,
A machine, no less,
Sat down to play chess,
And proved that it really can play!
Mankind really has nothing to fear,
From conniving computers, my dear!
'Tis true that Deep Blue won,
Had its day in the sun,
But its Master's the true winner here!
~Skyelz '97
But, one should also pay heed to Stephen Hawking’s warning on the occasion of getting new speech synthesising apparatus.
http://www.bbc.com/news/technology-30290540
http://www.independent.co.uk/life-style/gadgets-and-tech/news/stephen-hawking-artificial-intelligence-could-wipe-out-humanity-when-it-gets-too-clever-as-humans-a6686496.html
Google has succeeded in creating a machine capable of composing its own music.
http://www.msn.com/en-us/video/wonder/google-made-a-machine-that-can-compose-its-own-music/vi-BBtLqGU?ocid=spartandhp
Dear Bob,
After a long time I come back to this thread. Composing music using a rule-based system is not very difficult. In the Indian classical system a Raga is very formally defined. It has a "vadi" (the most important note) and a "samvadi" (the second most important note). It also has defined "pakads", the legal note combinations. Using these definitions one can possibly synthesize mechanically a piece of music belonging to that particular Raga. I think we can also consider a Raga a Markovian process for the purpose of synthesis.
Therefore, composing music is nothing fantastic; it is an ordinary academic exercise, and definitely not a threat to humanity. The real threat begins when someone starts believing that music composed by a computer will outclass human creations. I mention this because such beliefs are already present in the education sector: proponents of online tutoring systems have started thinking that human teachers may very soon become obsolete in the education system. RG has seen such use of the word "obsolete" in this connection. Online education is necessary in certain situations, and its development is therefore necessary, but that does not mean people should feel unnecessarily threatened by, or over-enthusiastic about, such systems.
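The Markovian idea mentioned above can be sketched in a few lines: a first-order Markov chain over scale degrees, where a transition table plays the role of the legal note combinations. The table below is entirely made up for illustration; it does not encode any real raga's grammar.

```python
import random

# Toy first-order Markov chain over the seven swaras, in the spirit of the
# rule-based synthesis described above. The transition table is invented
# for illustration only; it is not a real raga definition.
transitions = {
    "Sa": ["Re", "Ga", "Sa"],
    "Re": ["Ga", "Sa"],
    "Ga": ["Ma", "Re"],
    "Ma": ["Pa", "Ga"],
    "Pa": ["Dha", "Ma"],
    "Dha": ["Ni", "Pa"],
    "Ni": ["Sa", "Dha"],
}

def compose(start="Sa", length=16, seed=0):
    """Generate a note sequence by random walk over the transition table."""
    rng = random.Random(seed)
    phrase = [start]
    for _ in range(length - 1):
        phrase.append(rng.choice(transitions[phrase[-1]]))
    return phrase

print(" ".join(compose()))
```

Every phrase this emits is "legal" by construction, which is exactly the point: mechanical rule-following, not creativity.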
"[Elon] Musk said he thinks there's a "one in billions" chance that we're not living in a computer simulation right now, meaning Musk is a firm believer in the hypothesis that a super-intelligent artificial intelligence created the universe as we know it. "There's a one in billions chance we're in base reality.""
“The strongest argument for us being in a simulation, probably being in a simulation, is the following: 40 years ago, we had [the first video game] Pong, two rectangles and a dot,” Musk said. “That is what games were. Now 40 years later we have photorealistic 3D simulations with millions of people playing simultaneously and it’s getting better every year. And soon we’ll have virtual reality, augmented reality, if you assume any rate of improvement at all, the games will become indistinguishable from reality.” [emphasis added]
https://youtu.be/2KK_kzrJPS8
http://motherboard.vice.com/read/elon-musk-simulated-universe-hypothesis
Dear Bob,
I did not know of the existence of these writings. I believe you have studied them. My question is: are they mere hypotheses, or has there been at least some effort to prove them?
Anup,
Musk's argument originates from the writings of Bostrom. Check, for example:
Bostrom, Nick (April 2003). "Are You Living in a Computer Simulation?". Philosophical Quarterly 53 (211): 243–255.
These arguments, as far as I know, are merely hypotheses.
Dear Arturo, Bob and others,
Just now I have gone through the paper: Bostrom, Nick (April 2003). "Are You Living in a Computer Simulation?". Philosophical Quarterly 53 (211): 243–255. Since philosophy is not my cup of tea, I should not claim that I have understood everything. However, what I found is that the main argument centers on the availability of enormous computational power. From common-sense arguments, I believe, simulation of a human mind needs something more than computational power and better software. Do we really understand why we consider a rose beautiful? Do we know why we fall in love, or why we feel sorrow when somebody we love gets hurt? Unless we can mathematically model these emotions we will never be able to simulate ourselves.
I admit that we do not know who created us and why we are created. But ignorance does not give the liberty to hypothesize something like an imaginary tale.
I don't think so. Even though humans have done so much research into its applications, I still believe there is hope for the human race.
Are We Living in a Computer Simulation?
High-profile physicists and philosophers gathered to debate whether we are real or virtual—and what it means either way
By Clara Moskowitz on April 7, 2016
NEW YORK—If you, me and every person and thing in the cosmos were actually characters in some giant computer game, we would not necessarily know it. The idea that the universe is a simulation sounds more like the plot of “The Matrix,” but it is also a legitimate scientific hypothesis. Researchers pondered the controversial notion Tuesday at the annual Isaac Asimov Memorial Debate here at the American Museum of Natural History.
Moderator Neil deGrasse Tyson, director of the museum’s Hayden Planetarium, put the odds at 50-50 that our entire existence is a program on someone else’s hard drive. “I think the likelihood may be very high,” he said. He noted the gap between human and chimpanzee intelligence, despite the fact that we share more than 98 percent of our DNA. Somewhere out there could be a being whose intelligence is that much greater than our own. “We would be drooling, blithering idiots in their presence,” he said. “If that’s the case, it is easy for me to imagine that everything in our lives is just a creation of some other entity for their entertainment.”
Virtual minds
A popular argument for the simulation hypothesis came from University of Oxford philosopher Nick Bostrom in 2003, when he suggested that members of an advanced civilization with enormous computing power might decide to run simulations of their ancestors. They would probably have the ability to run many, many such simulations, to the point where the vast majority of minds would actually be artificial ones within such simulations, rather than the original ancestral minds. So simple statistics suggest it is much more likely that we are among the simulated minds.
And there are other reasons to think we might be virtual. For instance, the more we learn about the universe, the more it appears to be based on mathematical laws. Perhaps that is not a given, but a function of the nature of the universe we are living in. “If I were a character in a computer game, I would also discover eventually that the rules seemed completely rigid and mathematical,” said Max Tegmark, a cosmologist at the Massachusetts Institute of Technology (MIT). “That just reflects the computer code in which it was written.”
Furthermore, ideas from information theory keep showing up in physics. “In my research I found this very strange thing,” said James Gates, a theoretical physicist at the University of Maryland. “I was driven to error-correcting codes—they’re what make browsers work. So why were they in the equations I was studying about quarks and electrons and supersymmetry? This brought me to the stark realization that I could no longer say people like Max are crazy.”
Room for skepticism
Yet not everyone on the panel agreed with this reasoning. “If you’re finding IT solutions to your problems, maybe it’s just the fad of the moment,” Tyson pointed out. “Kind of like if you’re a hammer, every problem looks like a nail.”
And the statistical argument that most minds in the future will turn out to be artificial rather than biological is also not a given, said Lisa Randall, a theoretical physicist at Harvard University. “It’s just not based on well-defined probabilities. The argument says you’d have lots of things that want to simulate us. I actually have a problem with that. We mostly are interested in ourselves. I don’t know why this higher species would want to simulate us.” Randall admitted she did not quite understand why other scientists were even entertaining the notion that the universe is a simulation. “I actually am very interested in why so many people think it’s an interesting question.” She rated the chances that this idea turns out to be true “effectively zero.” [article continues at link]
http://www.scientificamerican.com/article/are-we-living-in-a-computer-simulation/
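The "simple statistics" in the Bostrom argument quoted in that article can be written out explicitly. The counts below are arbitrary placeholders, chosen only to show the shape of the argument: if simulated minds vastly outnumber original ones, a randomly chosen mind is almost certainly simulated.

```python
# Bostrom's counting argument with arbitrary illustrative numbers.
# If the original civilization runs n_sims ancestor simulations, each
# containing as many minds as the original, then a randomly chosen mind
# is simulated with probability n_sims / (n_sims + 1).

n_sims = 1000  # number of ancestor simulations (assumed, for illustration)

real_minds = 1
simulated_minds = n_sims
p_simulated = simulated_minds / (simulated_minds + real_minds)

print(f"P(simulated) = {p_simulated:.4f}")
```

This is the whole mathematical content of the argument; the controversy (as Lisa Randall notes above) is over whether the counts themselves are well defined at all.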
Bob,
''full (strong) artificial intelligence'': this is the new version of a recurring human myth, mechanical intelligence. Since Darwin, this myth has grown stronger among science-fiction writers, engineers and scientists. Just a sample here:
"Darwin among the Machines" is the name of an article published in The Press newspaper on 13 June 1863 in Christchurch, New Zealand. Written by Samuel Butler but signed Cellarius (q.v.), the article raised the possibility that machines were a kind of "mechanical life" undergoing constant evolution, and that eventually machines might supplant humans as the dominant species:
''We refer to the question: What sort of creature man’s next successor in the supremacy of the earth is likely to be. We have often heard this debated; but it appears to us that we are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race. In the course of ages we shall find ourselves the inferior race.
...
Day by day, however, the machines are gaining ground upon us; day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life. The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question.''
Why is this myth so powerful and attractive to the modern mind? Is it our tendency toward fetishism? Is it a technological idolization? Is it a sick machine-worship black magic inherent in modernity that needs to externalize itself?
Dear Ales,
I had not known that any auto manufacturers (the smarter ones ?) had come to the inevitable realization that human "labor" can be procured (& maintained) more cheaply than robots, and had already taken steps toward reversion. I wonder if/when any of the American manufacturers may have plans to follow suit?
Regards,
Bob
We are certainly not living in a simulated world, not in the sense in which we create simulations with computers. The dreams we experience are simulations. We can rely on our senses, which tell us we are not a game or a simulation. We live in a real world, and our intelligence makes us aware of what our senses measure from the outside. A super-intelligent being may exist, but it did not create the universe and cannot change history.
Ales,
I would be cautious about stating that, based on silicon development, we have hit a computing-development plateau (computer hardware is one thing, how we use it is another, and while there is correlation it does not imply causality). There have been advances in GPU technology that have further increased computing power at lower power consumption, still driving the enablement of new algorithms. Also take into account that computer systems as we know them still waste computational cycles (Wirth's law). I am also careful about comparisons of computational capacity with human intelligence capacity. They can be compared, but the development of computational capacity does not have to correlate with human intelligence capacity in order to outpace it; as a matter of fact, computers have already outdone human capacity in some tasks (and you cannot simply draw conclusions either way from this fact about future limitations). I simply find the comparison of human neuron connections to computer capacity fallacious. The argument takes as a given assumption that to develop intelligence we must have an equivalent brain, and then commits a fallacy of generalization (all intelligence must equal human intelligence). We have already violated the initial assumption by using silicon and not wetware; we have also violated the assumption of using "neurons", since we merely approximate them via simulation (I have still found no documentation on contrastive divergence being biologically plausible; if someone has, please let me know).
Anup,
The problem with beauty, sorrow, etc. is not the mathematical modeling but defining what each entails. Once defined, these concepts are easy to model. On your second argument, I do agree that in its current formulation the theory holds little scientific value (does it predict phenomena that are testable?).
I have just discovered this discussion. According to Kurzweil, over the next few decades humans will be a hybrid of biological and non-biological intelligence that will become increasingly dominated by its non-biological component (see the enclosed link to the Wikipedia article). However, as Ales puts it (and I agree with it), "…silicon computer development has hit its entropy wall in ca. 2010. Since then there has been no computing power increase". So there is probably no real danger from the side of non-biological components. But, going a bit farther, Ales and others add: "If there is any danger, it comes from future organic AI and organic robotics". If we combine this with the above-mentioned Kurzweil prediction, humans could become a hybrid of biological (innate) components and non-human but organic AI components that could become increasingly dominant. Then the development of full (strong) AI will probably not inevitably lead to the demise of the human race, but rather to the development of superhumans (!). This sounds like science fiction, although…. (?)
https://en.wikipedia.org/wiki/Ray_Kurzweil
Nick Bostrom's paper is available at the following link:
https://www.fat.bme.hu/student/pub/Programozas3/SimulationArgument.pdf
Dear Arturo,
I quote from your answer: "The problem with beauty, sorrow, etc. is not the mathematical modeling but defining what each entails. Once defined, these concepts are easy to model." If I understood correctly, what you mean is that the crux of the problem lies in understanding these concepts. I do agree. Unless we have complete knowledge about ourselves, it will not be possible to achieve a true simulation of even a single person, even with infinite computational power. Computational power is not the real hurdle.
We develop our knowledge by assuming a set of axioms. The purpose of our research is to minimize this set. The first axiom in the set is "we exist". Unless we know why we exist, our knowledge will not be complete. The axiom that we are being simulated by some more intelligent group sounds similar to the notion of "GOD"!
AI debate is something like the alternative universe or meaning of life debates. A great way to spend a few spare hours but not likely to ever come up with an answer.
Intelligence is only part of the psyche needed for power. History is full of stories of the super-intelligent being defeated by the boorish and the brutish. If we can build that into computers then we may have a problem. Humanity is no more defined by intelligence than it is by its weight-lifting capacity or sprinting speed. Dockside cranes can lift far more than we can, and Ferraris tend to move faster. That does not mean they are about to take over the world.
There is no empirical evidence that computers have or ever will have the capacity for sentience. The ability to play chess as a master is not an indication of high intellect it is an indication of high specialisation. Doing millions of computations a second does not require intelligence, only processing speed. Intelligence is not measured in terms of speed.
Humans have incredibly acute imaginations. They can imagine sentient androids and multiple universes. They can dream up hobgoblins and fairies at the bottom of the garden. None of that has ever resulted in any of these dreams coming true.
Stephen Hawking is undoubtedly one of the cleverest people alive and is justifiably ranked along with Newton and Einstein. That does not however make every one of his pronouncements a 'prophecy'. Newton and Einstein frequently talked b*****t too.
There is no danger from intelligent machines or little green men from outer space. The little green men are out there no doubt but we will never see them here.
Here are a few old AI speculations:
''Hephaestus built automatons of metal to work for him. This included tripods that walked to and from Mount Olympus. He gave to the blinded Orion his apprentice Cedalion as a guide. In some versions of the myth, Prometheus stole the fire that he gave to man from Hephaestus's forge. Hephaestus also created the gift that the gods gave to man, the woman Pandora and her pithos. Being a skilled blacksmith, Hephaestus created all the thrones in the Palace of Olympus.
The Greek myths and the Homeric poems sanctified in stories that Hephaestus had a special power to produce motion. He made the golden and silver lions and dogs at the entrance of the palace of Alkinoos in such a way that they could bite the invaders. The Greeks maintained in their civilization an animistic idea that statues are in some sense alive. This kind of art and the animistic belief go back to the Minoan period, when Daedalus, the builder of the labyrinth, made images which moved of their own accord. A statue of the god was somehow the god himself, and the image on a man's tomb indicated somehow his presence.''
https://en.wikipedia.org/wiki/Hephaestus
Aleš,
"There will still be small improvements." This is the point where I disagree: computational power does not equate to advances in AI. The supposition that you require more computational power to make a competing AI is based on biological equivalence, not on AI itself, and on current reductionist views built on our incomplete understanding of the brain. These assumptions need not be followed; as a matter of fact, they are biased, anthropomorphic judgments of what AI must look like.
Anup,
You made the point precisely.