I think one can find at least 10 questions like this. Do we intend to collectively develop a science fiction, or are we suffering from a tremendous inferiority complex? There could certainly be another choice, like "none of the above".
I asked MY question on this topic (at the link below) because I was interested in knowing why other people (including some of the world's most intelligent scientists) seem to have so much anxiety about AI. At the time I posted my question, I wasn't aware that the topic was so popular and that the question had already been asked a number of times and in a number of ways (I was new to RG and hadn't looked around very much ... nowadays, I do some research before posing a new question).
I appreciate up-votes, not so much because they affect my RG score, but because they (gratifyingly) reflect the reader's interaction with [or approval of] my question or comment. I do not like receiving down-votes, but they do not discourage me from asking controversial questions, nor from providing my honest and straightforward opinions whenever I respond to another's question (even when I know that on some questions ANY response is likely to receive some down-votes). As you know (from our previous interactions on several questions), I don't like the very concept of down-voting, and (like you) believe that it is wrong-headed (and anti-science) to have the capacity to anonymously down-vote a comment. If scores must be kept at all, then it makes more sense (except, apparently, to those at RG who gave birth to this weird concept of down-voting as a means of scoring scientific merit/reputation) to use an up-vote for those responses deemed worthy, and simply to refrain from voting on those with which you disagree. Down-voting (since it affects the public score) MAY be construed as expressing disapproval of a person's ideas/comments, but an anonymous down-vote must ALWAYS also be construed as partly a sneaky, under-handed, cowardly ad-hominem attack on the person. So, for those reasons, I have never down-voted ... even in (justifiable?) retaliation against those persons (only one, so far) who have instituted a down-voting vendetta against me, or those who have made vicious ad-hominem comments about me (more than one, but still rare).
In any case, I don't (personally) have any real fears or anxieties (or feel any inferiority or inadequacy about my own brain-power) about strong AI being developed (anytime soon, nor probably ever, if we consider only non-biological AI) to supersede the intelligence of humans (when measured as a simulacrum of the human mind). But I do worry that some people seem to have a fixation on the ridiculously anthropocentric idea that it's ONLY human intelligence that matters, or that only human-like intelligence can ever become dangerous to the survival of our species — that we are safe so long as a human-like strong AI is never developed with a higher IQ than humans. That idea (and the false sense of security predicated upon the anthropocentric belief that our human intelligence is somehow god-like and sacrosanct, and must remain superior forever to all intelligence) is a far more ridiculous and unscientific belief than any "Frankenstein or mad-scientist" story ... and such complacency (prideful, head-thrust-in-the-sand, pooh-poohing denial) is (probabilistically) far more dangerous to the survival of humankind than the mythological anthropogenic global warming (to give one example of a laughable boogey-man that is taken far too seriously by a lot of people who should have the intelligence to know better).
My best regards,
Bob
PS - have you ever considered that your belief that man's intelligence can never be superseded is a denial of God? If you believe in God, do you consider your own intelligence to be superior to God's? Doesn't it present a dilemma to believe that mankind's intelligence can never be outdone, while at the same time believing in an omniscient/omnipotent Creator?
"Pride goeth before destruction, and an haughty spirit before a fall." ~ Proverbs 16:18. [Please overlook that this quote is from a holy book; irrespective of its origin/source/sanctity, it is still a very apt observation of human nature].
Do we intend to collectively develop a science fiction, or are we suffering from a tremendous inferiority complex?
There could be other possibilities. The fact that they are asking these questions does not imply that they support the idea of AI dominating/superseding the human brain. As Aleš Kralj mentioned, people are concerned about such questions and these questions get more attention and more responses.
The motivation to ask such questions could simply be pure curiosity, a desire to know more about this topic.
I suggest that you share the questions with the persons who asked them. They are the best candidates for answering your question.
It is a recurring question. The literary style of "science fiction" that appeared at the end of the 19th century, in the era of the emergence of evolutionary ideas about life and the cosmos, naturally led to the question of where human evolution is leading. One of the answers was that it was leading to the construction of machines superior to their human constructors. Multiple science fiction stories have fed the collective consciousness with numerous versions of this scenario. And debates on strong AI are fed by these fictions, which we acquired in our youth without having had the chance to examine seriously the premises of these scenarios.
Early in the 19th century (1818), Mary Wollstonecraft Shelley wrote "Frankenstein; or, The Modern Prometheus", which tells the story of a young science student, Victor Frankenstein, who creates a grotesque but sentient creature in an unorthodox scientific experiment. There are a lot of parallels between the robot-takeover stories and Frankenstein's story, and with a lot of mad-scientist stories.
I have to admit one problem: I find it hard to stop thinking about technology and addressing solutions based on what I know in terms of math/technology. I have gained this knowledge in the course of growing up, and I feel comforted when I discuss it with people.
I guess that to create momentum in the market, be it in research or in product development, automation is the key. Sadly, we have a limited ability to give away control. At least programmers can tell where the intelligence/logic lies.
It may be fiction, but thanks to Hollywood our AVATAR has come into existence.
Fortunately, what is humane about us is that we are pretty "RESILIENT"; that is intelligence in itself.
Probably that is why the connotation is "artificial": carrying out tasks on command, just executing, nothing EXTRA ...
Future: increased control in the hands of intelligent consumers/creators ...
Maybe people want to increase their RG score, or maybe they have a real interest in technology and human life.
We humans are consciousness in a body. Some great scientists are working on a big project to transfer our consciousness to an avatar. They really believe it will be possible; people like Sir Roger Penrose and many others believe it.
I suggest you check out the global initiative GF2045. It may sound crazy now, but they are seeking immortality by the year 2045. They are trying to transfer our minds into avatars so that we can leave our material bodies and live longer.
I think this is a bit far from your initial question, but you reminded me of this project. Let's give it a try!
I am absolutely not afraid of strong AI, because I think it is a total myth. What I am afraid of are the hidden reasons why the myth is popular. What worries me is not that the myth might become reality; what worries me is the hidden force in the collective psyche that keeps this myth alive.
I fully agree with Luis, and possibly this is the reason why I asked this question.
It may be slightly unrelated to this thread, but I found another category of questions on RG expressing the view that very soon human teachers will become obsolete in the total education scenario, replaced by AI-based, computer-aided online systems. Some are advocating their implementation. I think this is an extremely worrying situation, because people have started believing that AI is an all-powerful supreme deity that can do anything and everything.
I quote from your answer: "have you ever considered that your belief that man's intelligence can never be superseded is a denial of God?" If so, then how can you say that AI can supersede our intelligence when it is created by us? Isn't that a logical fallacy?
I guess that if we are aware of technology and its significance, we will not worry about the invasion.
Market momentum will be created for a new technology, and research will (to some extent) work toward its realization. Yet the "life sciences", at least in Canada, are very, very significant ...
There would only be a fallacy IF we adopt a static viewpoint (as you seem to) that the AIs "created" by us shall never "evolve" beyond the initial capabilities that we initially impart to them. A strong AI may not only develop the capability of self-maintenance and self-replication, but may ultimately develop the ability to improve [evolve] itself ... to make each successive generation more capable and more intelligent than the preceding one ... and to accomplish this much faster than we humans who are saddled with very slow biological reproduction. Once AI develops to a certain "strength" [the capability for independent progressive learning, coupled with self-replication], it COULD then theoretically outstrip humans [even using human intelligence as the measuring scale] in the intelligence race.
There is no fallacy unless we assume that AI will never have the ability to evolve. To me that seems tantamount to believing that time shall stand still.
Life has been evolving and reproducing for almost 4 billion years on this planet, and human civilisations are the result of all this evolution. Why on earth would this living evolution start over from a silicon base? Since their invention, computers have mostly been useful for making our tools better and for helping us communicate better. In other words, they are gradually playing the role of our planetary nervous system. That nervous system by itself is totally stupid, but it makes us smarter, and ever smarter, because it helps us be better social beings. I do not believe the science fiction authors predicting a rebellion of the machines, as if the machines had a will of their own, or as if we knew what a will of our own really means. It is like predicting that artificial limbs will rebel. Or that heating systems will rebel, and since they get better every year, they will outsmart us!!!
I assume human evolution shall continue at pretty much the same rate as it has over the past couple of million years ... certainly not a constant rate forward toward increasing intelligence, but "progress" in fits and starts, as is the inherent constraint of biological systems. However, there isn't the same theoretical constraint — imposed by extremely slow biological evolution and reproduction — on autonomous AIs, which MAY be able to direct and accelerate their own "evolution" and reproduction at a faster rate than that of human intelligence. The "danger" of human intelligence being surpassed by its "silicon slaves" lies in the RATE at which AIs may increase their intelligence relative to humint.
That is before even considering the raw rate of reproduction: AIs may readily use factories to stamp out units at a very high rate of production. So what if they are not smarter than you ... chimpanzees could not beat me on an IQ test, but they can be formidable adversaries in hand-to-hand combat, or in "gang" warfare. And how many half-as-smart-as-humans robots does it take to present a clear and present danger to the survival of the human species, once they have co-opted control of the resources we need for survival ... say, by poisoning the water supplies, or by making 90% of the earth's surface sterile with intense radioactivity? In any case, when it comes to the success of one species over another, rate of reproduction (overwhelming by numbers, not IQ) has often meant a lot more than intelligence. Once again, I point out the numerous successful microbial creatures who, with intelligence that won't even make a mark on the humint scale, have already "defeated" the human genome ... inserting themselves as parts of us, changing the composition of our very DNA. I would point out, also, whole creatures that have apparently been subsumed within ourselves as part of our composite (an old proverb comes to mind: "if you can't beat [out-smart?] them, then join them") ... yes, we are clearly composites of much older and simpler creatures ... for example, the mitochondria, and even the various blood cells that flow through our bodies. Haven't these creatures gained the benefit of the vast intelligence evolution has chosen to gift us with, without having ANY measurable intelligence of their own (and that's not even considering the value of the resources and protection we provide toward their replication success), just for consenting to ride along in our vehicle?
I believe you (and Louis as well) may have difficulty perceiving reality because you are reluctant to admit that humans are just animals that have been favored by evolution with an unusual level of intelligence [sentience], and that we may not be gods (nor favored creations made by the hand of God ... in HIS image).
Let us try to understand what "evolution" means in connection with AIs. An AI is a program which runs on a digital computer. The program has a finite number of states that are reachable from its initial state. I stress the word finite because the program is running on a finite machine. The procedures that are executed (I do not mean procedures in the formal sense understood in a programming discipline) may follow some algorithm or may be evolutionary in nature. If execution is algorithmic, there is no threat, because the algorithms are designed using human intelligence, and therefore their ultimate characteristics are known. A threat may be apprehended when non-algorithmic executions are allowed. However, the structure of all such systems is such that they strive to optimize a so-called fitness function, and the fitness function, again, is designed using human intelligence. Tell me one instance where unexplained (unexplained by human intelligence) behavior of an AI system occurred.
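The point above can be made concrete with a minimal sketch of an evolutionary loop. All names here (`fitness`, `evolve`, the OneMax objective, the population and mutation parameters) are illustrative assumptions, not taken from any post in this thread; the sketch only shows that whatever the population "evolves" toward is fixed in advance by the human-designed fitness function.

```python
import random

def fitness(bits):
    # Human-designed objective: count of 1-bits (the classic "OneMax" toy).
    # The system can only ever optimize the criterion its designer wrote down.
    return sum(bits)

def evolve(length=16, pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    # Random initial population of bit-strings.
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half, as judged by the human-chosen fitness.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # Variation: each survivor yields one child with a single bit flipped.
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(length)] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # climbs toward 16, the maximum of the human-chosen objective
```

Note that nothing "unexplained" can emerge here: change the fitness function and the population dutifully converges toward the new, equally human-specified target.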
Moreover, the advantage of a biological system is that it is self-maintaining and self-reproducing. Show me a single instance where an AI system could produce (fabricate) the necessary hardware on which the supposedly more intelligent program may be installed.
One may think about a silicon-based biological system, but until now that is nothing but "science fiction".
This is part of a serious discussion!!!!! Sad, as I always say: the only operation we have successfully taught hardware is to ADD (e.g., the FPGA carry chain); a multiplier has an adder as its fundamental building block.
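The "everything reduces to ADD" remark can be illustrated with the classic shift-and-add multiplication scheme, which is how simple hardware multipliers are built on top of an adder. This is a generic textbook sketch (the function name `shift_add_multiply` is mine, not from the thread), written for non-negative integers:

```python
def shift_add_multiply(a, b):
    """Multiply two non-negative integers using only addition and bit shifts,
    mirroring how a hardware multiplier is built from an adder chain."""
    product = 0
    while b:
        if b & 1:          # low bit of b set: add the shifted multiplicand
            product += a   # the only arithmetic primitive used is ADD
        a <<= 1            # shifting is just wiring in hardware, not arithmetic
        b >>= 1
    return product

print(shift_add_multiply(13, 11))  # 143
```

Each set bit of `b` contributes one addition of a shifted copy of `a`, which is exactly the row-by-row partial-product summation a hardware multiplier's adder chain performs.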
A lizard can regrow its tail, but I think our system will react to anything external to it by making things worse.
As far as free will goes, this is sadly an "on-the-go" situation, not a premeditated one. And if that is the case, then history has worse stories (human-based); I agree, AI is better :)