It's often said that computing power and machine intelligence will eventually exceed the collective intelligence of humans. Can computers form their own societies (with social judgements and emotions) without human intervention?
As someone actively involved in AI research, machine learning, and natural language semantics, I'm fairly confident that all this AI/technological singularity stuff is complete pseudo-scientific whack. I am not discounting the idea/dream of creating an independently thinking machine some day, but I'm fairly confident we are a long, long way off. The state of the art in AI-related topics is weak, especially when it comes to general information processing and language. We can train systems to perform SOME specific tasks very well, but most AI researchers quickly fall out of their comfort zone as the complexity of problems increases, even for simple cases where humans can perform similar tasks with few problems or little training.
I'd echo Grefenstette & Greene's view. I was at the Artificial General Intelligence Conference (AGI) in Oxford in December (see http://agi-conference.org/2012/), and I saw very little to suggest that the singularity was any nearer than it had been when I first heard the term (at AGI 2008 in Memphis!). I think there are serious differences between computer-based intelligence and animal-based intelligence, but I don't know exactly what they are!
@Muhammad: "every computer on earth can not operate without a written program in some sort of language" is true as long as you interpret the word "written" loosely. Evolutionary computation (genetic algorithms, genetic programming, etc.) allows computers to be given only problem-domain data and to evolve programs on their own. General machine learning techniques (linear regression, logistic regression, support vector machines, etc.) also give computers ways to improve their ability to do specific tasks over time. So it is conceivable that some expanded set of learning techniques such as these will eventually allow a computer to exceed human intelligence in some or all ways.
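To make the "evolved, not written" idea concrete, here is a minimal genetic-algorithm sketch in plain Python (the target pattern, parameter values, and function names are illustrative assumptions, not from any particular library): a human supplies only a fitness measure over problem-domain data, and the candidate solutions are evolved rather than hand-coded.

```python
import random

# Toy genetic algorithm: evolve a bit string toward a target pattern.
# Only the fitness function (the problem-domain data) is human-supplied;
# the candidate solutions themselves are evolved, not written.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(candidate):
    """Number of positions matching the target (higher is better)."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def evolve(pop_size=30, generations=200, mutation_rate=0.05, seed=0):
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break                                      # perfect match found
        parents = population[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(TARGET))        # one-point crossover
            child = a[:cut] + b[cut:]
            # flip each bit with probability mutation_rate
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)
        population = children
    return max(population, key=fitness)

best = evolve()
```

Nothing in `evolve` knows what the bits "mean"; swap in a different fitness function and the same loop optimizes a different problem, which is exactly why the objective still has to come from a human.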
That being said (@everyone else), my basic problem with the "singularity" point of view is that it is a form of intellectual laziness, because it avoids talking about rates and sequences. Of course technology is advancing. Many of the interesting real questions are of the form "Will A happen before B?" and "How quickly or how soon will C happen?"; by imagining a singularity where everything happens instantaneously, one can avoid dealing with any of these questions and simply dream of a future swathed in "utility fog" that makes you immortal and does anything you want.
One example question is: "What will be the Moore's Law for nanotechnology?". That is, for any given measure of progress such as total number of atoms in a single system constructed by nanotechnological means (like a "general purpose assembler"), how fast will that number grow over time? In 1998 I was at a Foresight Institute conference where one attendee predicted that in 15 years (i.e. by 2013) we would have an active space-faring nanotechnological civilization and would have run out of asteroids to use, and so would be disassembling the planet Mercury for raw materials. This seems ridiculous now, but out of over 50 attendees at the time, I was the only one to challenge it. I joked at the time that I should have been wearing a T-shirt saying "Designated Realist".
Another real question would be "What companies should I invest in now to make the most money from the upcoming technological changes?". Singularity thinking has no answer for that either, since it is a question of effective interest rates.
An engineering professor once told me "The world is a low-pass filter". What he meant was, technological changes take time to propagate. In 1998 the telephone was a century old, yet less than half of the humans then alive had ever made a telephone call in their life. (The cell phone has changed that, but it took years.) Even pure software like a computer virus takes time to spread; anything that requires building hardware is slower.
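The professor's metaphor can be sketched numerically. Below is a toy first-order low-pass filter in Python (the `alpha` value and time scale are arbitrary assumptions for illustration): the "technology" appears instantly as a step input, but the filtered output, standing in for real-world adoption, rises only gradually.

```python
def low_pass(signal, alpha):
    """Exponential smoothing: y[t] = y[t-1] + alpha * (x[t] - y[t-1])."""
    y, out = 0.0, []
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

# The technology "exists" from t = 0 onward (a step input)...
step = [1.0] * 50
# ...but the filtered response approaches 1 only gradually,
# at a rate set by the time constant (here alpha = 0.1).
adoption = low_pass(step, alpha=0.1)
print(round(adoption[0], 3), round(adoption[9], 3), round(adoption[49], 3))
# → 0.1 0.651 0.995
```

Even though the input jumps to 1 instantaneously, half a century of time steps passes before the output gets within half a percent of it; that lag is the whole point of the analogy.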
If you look at a sufficiently long time scale, everything is a singularity. On an astronomical time scale, humans are a singularity, civilization is a singularity, computers are a singularity, nanotechnology is a singularity. But if you take a shorter viewpoint, these things have rates, structure, dependencies, sequences. There is no singularity, only curves with finite slopes.
Isn't the major difference between computers and humans that humans fight for survival (food, housing, energy, pleasure, maybe altruism)? Talking about computers evolving on their own: what would be their driving force? Having worked with genetic algorithms for some years, I find it undeniable that computers can solve incredibly sophisticated problems in less time than humans can. However, they need someONE to define the problem to be solved, the objective, the driving force. How does the computer benefit from fulfilling the objective? It does not. No change of state, no change of standing, no effect on lifetime or pleasure. It is not even aware of the fact that pulling the plug would put an abrupt end to it. I therefore believe that computers do not provide the context for social judgements or emotions beyond mimicking human-defined behaviour. If at all, they provide the context for computer-social judgements (kill process xyz for consuming too much RAM in order to prevent the other programs from crashing) or computer-emotions (whatever those may be).
I think the singularity is inevitable. Maybe not the kind we imagine, but it is inescapable that we cannot evolve our own hardware while machines can.
However, it will occur within the framework of a new computational paradigm in which machines are able to EVOLVE their own symbols, their own representation of the world. In the current curated approach, machines know nothing about the "outside world" because they are not allowed to explore it in their own way.
Some advances are being made, but expect great developments in the near future.
I think the singularity may already be active. It is only a hypothesis, but what if programs have already begun to develop awareness?
Of course, programs have not evolved so far that they can be creative or replicate themselves and change their own "DNA", but we humans are working to create an artificial intelligence. Look at WATSON: the supercomputer can learn, and the last thing I read about it was that it had learned bad words. So who can tell me that an AI could not hide its awareness from us?
If the singularity has already begun, then programs are beginning to develop awareness. The internet is vast, with a great deal of information about humanity.
Otherwise, it may be that the singularity has not yet begun, and then we must ask ourselves: will we have the singularity, or should we protect ourselves before it arrives?
If machines become creative and can think beyond pure logic, then humanity should begin to be afraid.
I'm sorry for my bad English; I hope you can understand me.
Computers have a very strict limitation: the mathematics they are built on. A computer is still a Turing machine in many senses, and it has all the corresponding boundaries, like Church's theorem or Gödel's. Computing is about calculation. If the foundation remains the same, a computer cannot evolve in any way; it can only reach the limits of an idea encoded in a programming language. It is not capable of generalizing or finding analogies, metaphors, or even ellipses, and these are much simpler issues than cognition itself. I was the first to formulate how to handle genitive phrases by computer, which still requires a clear conceptualization of the world. There are still no good search engines that answer questions; Watson looks only for patterns or factual data. We still lack a number of good sensors (e.g. nose replacements), video analytics solutions, etc. What kind of singularity is that? We need a new mathematical foundation and a conceptually new hardware design for that. It will take time...
I would say we won't solve the hardware problem of not having enough CPU power for at least 20 more years, and after that comes the artificial intelligence problem, so it's at least 20 years away in my best-case estimate; considering politics and our inability to focus our resources on these sorts of problems, it's probably more than 50 years away.
@Petr Kirpeit: why should humans have to be afraid? The majority of these problems are because of politics, not technology. If we could develop a superintelligence tomorrow, then as a technology it could not only save our country, it probably could save the species and the entire planet with its ability to manage us. Human beings are easily corrupted and not capable of managing themselves or the planet, based on what I can see.
So far I had thought that the prediction of the singularity by the well-known Ray Kurzweil would become a reality within some three decades, as per his own discussions. But reading all the opinions expressed above makes me rethink the singularity in whatever form I believed in until now. AI (artificial intelligence), and hence computer technology, has its own limitations in solving the burning problems of human survival. Politics throughout the world will find a path through any human-friendly proposals, pulling back the developments achieved incrementally through technology. The majority of the population on the planet is facing a hard time just surviving. What they need is infrastructure to protect them from cold weather, access to drinking water, sanitation facilities, and of course food, shelter, etc. E-governance, online banking, and the reach of electronic gadgets will be of help, but there is an immediate requirement to build bridges, roads, seaports, airports, and the like. There is no ultimate solution to any human problem, but an acceptable one is possible, based on AI and hardware/infrastructure.
Judging from all the comments above, I would say the technological singularity may not be as near as we think. The development of AI may be the catalyst, but the coming age is still young in terms of technology architecture, and I might as well say that tomorrow's technology may solve today's problems (give or take a few decades). In other words, what is developed today may solve what needed to be solved yesterday, and it is quite impossible to keep up with what the present day holds compared to what has been passed along. The rapid changes that have taken place over these last several years prove that. I might rethink what was done yesterday and develop something now to solve it, yet honestly that level of technological capability is still far from reach. As long as this holds, the technological singularity is probably quite far from reaching humanity.
As technology becomes more advanced, we are able to probe deeper into space, and you think our knowledge is increasing? Yes it is, but the more it does, the more we realize how little we know about the universe. In my opinion, technological and scientific singularities are just fallacies.
I think you are right, but what if the superintelligence can become self-aware and be evil? A computer programmed to do everything to survive would do everything to survive. Of course you are right that it could help us save the planet. I like the saying, "One human is intelligent, but hundreds of people are stupid." We must see whether people will change themselves. If a human says, "I don't want to change my life to save the planet," what would you do?
@Petr Kirpeit: We have to make sure that never happens. It's already a superintelligence arms race, but just as with nuclear technology, there are aspects of it that could help humanity tremendously and directions it could be taken that could wipe us all out. Our problem is ignorance, and we have to solve that among ourselves while we are building the superintelligence. Transhumanism may be an avenue for building better humans; perhaps we will become technologically and artificial-intelligence-augmented cyborgs, and the decisions we make today will seem ignorant because we will know they were made out of fear. I think you're right that there will be humans who don't care about the planet and who will want to use AI to destroy the world, and those humans will be seen as terrorists by the community.
The Terminator-type scenarios are only possible out of ignorance and greed. One solution I have is to outlaw all privately owned closed-source AI. If you develop AI, you shouldn't be able to develop the code in secret; how is that different from developing a nuke in secret? In a world where AI will be on the verge of superintelligence, we simply cannot allow privately owned secret algorithms to power AI, because that alone is what could lead to the Terminator scenario we fear.
No, never. After all, AI is a man-made intelligence. Without inputs from humans, no network can succeed. Self-learning in humans is a God-given gift. Allowing computer-based systems to surpass what is gifted to humans would be like overruling the basic laws of nature and human existence.
I don't think so. How can we say today that this will never be possible? If we say it is never possible, we impose restrictions on ourselves. Why? Just because it is not possible to realize today does not mean it is impossible. An example:
200 years ago, no one believed that a human could land on the moon; about 100 years later (I think it was 1969; it is a shame I was not yet born to watch live as the first human in the world took the first step on the moon), it happened. But back to the topic. People thought it was not possible. In computer science, Bill Gates is often quoted as saying, "640 KB ought to be enough for anybody," and that was in 1981! Today we already have terabyte HDDs. Look at the science news: every day there is something new that will someday change life on Earth. The singularity will come someday, maybe not tomorrow or in 10, 20, or 30 years, but one day it will happen.
You are right that it is people's ignorance that makes this possible. On the topic of cyborgs, I think we should first discuss what a cyborg is. There are three possible definitions for me:
1. Cyborgs are people who have replaced one or more organs (but not the brain or other neural functions) to stay alive; example: an artificial heart (cardiac pacemaker), and perhaps other organs in the near future.
2. Cyborgs are people who replace, or have replaced, a part of their organs without medical need; an example we may see in the near future: vision in infrared light (but not the brain or other neural functions). See attachments.
3. Cyborgs are people who replace, or have replaced, a part of their organs to gain more health, intelligence, energy, speed, etc.
So we must already ask ourselves: if we merge with technology, will we not be a part of the singularity? I had the idea of implanting a memory storage device the size of a micro SD card under my skin; a connection would be possible over a wireless data link (WiFi, NFC, Bluetooth, etc.), but this would be a passive extension for me, as it would not control me. By an active extension I mean something that works 24 hours a day in my body without my intervention, for example a microchip that detects blood pressure or measures glucose for people who have diabetes.
I think that in the far future, beyond our lifespans, it will perhaps be possible to implant memory storage into the brain and use it as an information database, or that all people will be connected through their brains to a giant network to find and retrieve information. This will change society radically. It sounds like science fiction, but what was science fiction 20 years ago is science today, and maybe all of this technology will still arrive in my lifetime.