See the links below for details, or visit:
https://www.facebook.com/pages/ArtilectOrg/424121604333175
http://www.forbes.com/2009/06/18/cosmist-terran-cyborgist-opinions-contributors-artificial-intelligence-09-hugo-de-garis.html
Of course, learning capability can be programmed. Perhaps every human capability can be programmed, but I am talking about learning without being programmed, or more accurately, the capability of understanding, learning, and any other intelligent action without being programmed. Consider that if we accept the evolution of animals from unicellular organisms, then intelligence was obtained by evolution, that is to say, without being programmed.
For an intelligent system, a program is mainly the result of an intelligent action, not its cause. Machines that need to be programmed are not intelligent systems; they are only products of intelligent systems. This is why, for something smarter than a human, we cannot consider systems that need to be programmed, but only evolving ones that learn from their own experience.
The computer-human pair is similar to the telescope-human pair. A telescope sees nothing without a human behind it. A computer thinks nothing without a human behind it. A computer does not know what the symbols in its memory might mean.
Then what is the meaning of a human being...? If we give entire control to a robo
Personal opinion: thinking of biology and its 10 trillion cells, it is not going to happen as far as I can see into the future.
Creation cannot be smarter than its creator; otherwise it is not a creation. Machines can be smart and efficient, but they will always need human intervention to work properly and smartly.
Of course! One reason refers to artificial societies theory...
Gosh, I don't know! For that matter, should we allow children to be smarter than their parents?
Seriously. Is this a joke/YouTube?
Gell-Mann's Totalitarian Principle states: "Everything not forbidden is compulsory."
The ascent of man mandates it.
Human existence is temporally bounded by extinction or evolution, so yes: machines are the way forward.
Of course we should make smart machines. I can then retire, swim, and eat grapes all day while the machines do all the hard work.
I think human-made machines could never be as smart as Lord-made humans! I agree with Muhammad Javed: "Creation cannot be smarter than creator..."
Creation does not have to be smarter than the creator for me to have smart machines that can do all the dirty work.
I think that they can't be smarter. They can be faster: they can think faster and calculate faster. But humans can create, and that is why humans will always be smarter.
Machines will never be smarter than humans.
This will happen only if humans become more stupid and deny their own mental abilities.
Cyborgs concern me more than machines.
By increasing brain function using "implants" we could make cyborgs smarter than humans.
Maybe not now, but sometime in the near future.
And just think, humans can be evil, but cyborgs could be worse.
No, because then the machines would be the ones monitoring us. The solution is to integrate with them before that happens. Soon we will have to wear artificial neural networks in our own brains, as a prosthesis over the ones we already have.
If creation cannot be smarter than creator, Einstein's mom must have been one *hell* of a theoretical physicist.
People! Judging by this comment thread, I think it's clear we are already perilously close…
You bet. We are thinking machines, and our "intelligence" is a social construction, isn't it? Most machines are (already, now, not in any future) smarter than most men.
To answer this question, an ordering must first be defined "
Humans fall into different categories: lazy, hardworking, medium, and so on. No human is perfect in this world. If that is the case, why expect a machine to be smarter than a human, and what if the machine goes wrong? Maybe nobody would be able to control the machine at that point. Let a machine be a machine; let us make machines work like human beings, but never smarter than all human beings. And let all human beings try not to be too lazy :-) That's my opinion!
@Isaac Wolkerstorfer Yep, we should allow our children to be smarter than us.
Nice to see a number of fellows giving their views on this hot topic. In my view, we should allow machines to become smarter than humans, because in some situations it is very difficult for human beings to control the conditions, for example a fire in a building or a cyclone. Humans are limited in handling such conditions, so we can provide machines with intelligent behaviors to operate in them.
But we must keep control over them...
They should be smarter than humans, because humans have limitations in working with precision, in hazardous environments, and so on. So I believe machines should be smart enough, while remaining under human control, to assist us in doing all the things that are almost impossible for humans.
"In mathematics the art of proposing a question must be held of higher value than solving it."
Georg Cantor (1845 - 1918)
The question proposed by Konrad is important, but it cannot be answered by means of opinions. Science is not a collection of opinions, and it is worth keeping ResearchGate free of them. My advice is not to write what you cannot prove. Several centuries ago Galileo said:
"In questions of science, the authority of a thousand is not worth the humble reasoning of a single individual."
(Quoted in Arago, Eulogy of Galileo (1874))
According to Galileo's quotation above, if you want to carry out scientific work, you must write opinion-free texts. Before saying anything about smartness, you must state what the term smart means in the corresponding context. Nowadays there is no machine as such; what we have is the symbiosis machine + programmer. A machine without a program written by a human is nothing. Thus, to analyze the question, you cannot think only about Turing machines or computers. To compete against humans, a machine must have a self-learning structure; therefore machines and humans cannot be compared by the tasks they can carry out, but by their potential actions. For instance, Einstein's brain when he was 4 years old had no mathematical skills, but it was potentially smarter than any computer; accordingly, every comparison must be stated from an evolutionary viewpoint. The question is too important and too complex to be solved in a few paragraphs. But some ideas can be written, provided that they are opinion-free and presented together with some kind of proof, or at least some illustrative instance.
I don't think so, because if machines become smarter than us, they will control us, and that would be a disaster for humanity.
So I think we don't have to allow machines to become smarter than us.
Because we need to control machines, not have machines control us.
As I have said, before comparing, some comparison criteria must be stated.
Unlike other machinery, intelligent systems can do any task they can describe. For instance, if you can describe how to solve an equation, you can solve it too. However, you can describe how a clock works without being able to do the clock's work. Indeed, the designer of any machine can describe how it works; therefore the machine cannot be smarter than its designer. It can work more quickly, but not more smartly. However, a machine can be smarter than a human who is ignorant of the way it works. Following these ideas, for every pair of intelligent systems X1 and X2, we can state the order X1 < X2 whenever X2 can describe the structure of X1. Under this order, if a system X has a
Dr. Palomar: What you have posited is actually a move toward a more process-oriented methodology: treating facts through proper reasoning in order to reach a definite conclusion, or to solve an unsolved problem with illustrations from a mathematical point of view. It may be said that reasoning is more reliable than opinion, but very often opinions point out facts that lie under hidden assumptions, and these can definitely shed new light on an old problem.
You have cited the example of how Albert Einstein's young brain was smarter than a computer, and indeed it was; yet I also believe that every child ever born in this world has a brain smarter than computers. The difference lies in the lack of equal opportunities provided to the poor citizens of the world, still impoverished, malnourished, and without any formal education, unable to afford the finer technologies that shape the minds of people in affluence. Yet still, we have seen geniuses spring from those environments. So the environment, or rather the learning structures, are unequally distributed among equal minds born with equal abilities.
Considering that computers can be fed an unlimited amount of data to process, machines have all those learning technologies at their disposal; yet the process of processing such data is still evolving. And so are the problems with the models at our disposal that design those learning technologies, the complexities associated with them, and the precision needed to execute them. And I think here lies the importance of differing opinions, which may further sharpen our dimensions of thought.
Opinions are important, and opinion gaps between scientists are no good. There lies the importance of experts, or even non-experts, who often offer varied opinions, and it is the duty of the jury (the scientists) to validate those opinions.
"The one common experience of all humanity is the challenge of problems." - R. Buckminster Fuller (1895 - 1983), American Architect, Author, Designer, Inventor, and Futurist.
The models of nature are complex, and the complexity of our mind helps to frame such complex models of nature in simple, intuitive paradigms. Larry Wall said, "Using a simple tool to solve a complex problem does not result in a simple solution."
Einstein once suggested that one must think of ideas that are unthinkable and hence look absurd. Attempting to solve difficult problems that often seem impossible carves out paths toward solutions.
Einstein said,
"Intellectuals solve problems; geniuses prevent them." - Albert Einstein (1879-1955), Physicist and Nobel Laureate
"It's not that I'm so smart, it's just that I stay with problems longer." - Albert Einstein (1879-1955), Physicist and Nobel Laureate
Links:
http://c2.com/cgi/wiki?EinsteinPrinciple
http://www.decision-making-solutions.com/problem_solving_quotes.html
So, in some way, it may be true enough to say that opinions are important and do matter; yet it is also true that problems cannot be solved by opinions alone.
Machines should not only become smarter than humans, but also more benevolent. It may be possible to "evolve" an intelligence that is smarter than its maker. It depends on the kind of process used for this type of development.
We CANNOT build and support a machine that is smarter than a human! Could an animal create something more intelligent than itself? Of course not! So the peak of human technology may create something similar to a human, but I believe it CANNOT surpass one!
Dear Sidharta,
I understand that what you call opinions are, in fact, what are called working hypotheses, which are the motor of any research. Indeed they are necessary. By contrast, what I call opinions are claims based on personal preferences and presented as if they were true, not as propositions which must be tested or proved. To avoid confusion, working hypotheses must be stated in a conditional style, and by no means with the solemnity of a proved statement. In addition, sometimes we read opinions which have already been stated by other contributors. The repetition of opinions resembles a vote, and scientific statements are not accepted by voting, but by proof. Of course, I agree with your post; the divergence consists of different uses of the term opinion. This is why I have also said on several occasions that it is good practice to define the key words of our texts.
In any case, stating opinions which do not contain something new is a waste of time. This is why opinions must be presented together with some explanation of their utility or opportunity; otherwise they are, again, a waste of time.
There is already hardware that is more powerful than the human brain: the K computer. http://www.scientificamerican.com/article.cfm?id=computers-vs-brains Good software to profit from this hardware may be the only thing we need to create a super-intelligence.
If now is the time for joking, I must say that a woman's brain will never be surpassed by machines; fortunately it will always be a mystery.
Isaac Asimov never said that robots will not harm people, only that they should not. Military drones and robots do not kill people; the bullets that they fire, upon a man's command or by man-developed software instructions, kill people.
I guess that in this century one or more machines much smarter than humans will be created. Technical progress cannot be stopped. The problem is how to defend humankind from machines with diabolic functions.
As far as I know, to date no country has laws of this type.
We already have a devil. We do not need a new one. Either natural or artificial. We should strive to make more benevolent things.
Machines serve their purpose well: they do long, boring, and tedious tasks for us. If we give them too much intelligence, they might refuse to do these tasks.
"The alternative is that civilizations don't last very long, but destroy themselves." ~Stephen Hawking
We have a very simple question: "Should we allow machines to become smarter than humans?"
Here is a very simple answer: we do not have the right to allow this. Many people may think we have a long time before this event. In my opinion, defense laws must be created by the governments of different countries right now.
I agree with Yuri that we do not have the right to allow this, but for the sake of our learning, research in this direction must be continued.
Prof. Palomar,
I can somehow foresee in this "brainstorming" session that someday, robots with intelligent "brains" will definitely "storm" our civilization (forgive me some humour)!
I do agree with the views you have presented as rational reasons why such old paradigms should be revised, and why there should be a new attempt to accommodate the new myths of the present day. The three laws, assumed to be a real benchmark for future designs of artificial minds, whether in robotics or in systems simulation, must accommodate the new aspects of Strong AI in the most robust sense, while still leaving enough ground for the flourishing of Weak-AI-based robotics built on determinate, fuzzy logics, to convey an introspective inquest into self-awareness and the complexity of the former with respect to the latter. So, keeping the fantastic realisms of the future and other alarmist AI theories at bay, I would like to move on to the real shore where the war in virtual realism has begun: that is, a deeper look into the applications and implications of the practicalities related to the problem of self-awareness in robots and, in addition, the potential use of AI-based machines for other need-based applications.
This invariably germinates a few thoughts. If we move in haste, too far beyond our present realism, I think we become unduly expressive of threats that may be deemed premature; so we can set aside, for the time being, the probable reactionary vengeance and the cause-and-effect scenario that would lead machines to think of humanity as a threat, or the reverse thereof.
However, in such contention there still remain some essential elements of assumption about the complexity of such systems, the measure of their intelligence, and the representation of artificial consciousness. History offers Asimov's presentation of the classic example of weak AI, on which the three laws were formulated, and now the time is ripe to go beyond such limitations for the sake of Strong AI. For example, back in the primitive days of artificial intelligence, Alan Turing laid the groundwork that paved the way for mathematical investigations of computability (Andy Clark). From then onward, the question of intelligence and complexity became both the scientist's tool and the layman's nightmare. Three propositions can be presented on this scenario, in common order:
That,
---All intelligent systems are complex
---But not all complex systems are intelligent, and,
---Both intelligence and complexity define a system's representation of knowledge.
Contrary,
A system may not be intelligent, yet can be very complex (viruses).
Hence, the philosophical theory of rational reasoning generally specifies who is in control of the technology and the intelligence behind those complexities. If it is a woman or a man, we would not need any Turing test; but if it is the machine itself, we would always need to carry a 'Turing meter' in our handbag in the future, as you rightly foresaw from your experience with your Transputer Development team at Caltech or JPL, which by this time would have outsmarted human beings (referring to the K system).
Thus, if those alarmist theories ever become reality, when androids endeavor to outsmart humans, they will invariably devise the fateful concept of immortality through self-replication. I find such a future bleak.
Collaterally, what can be envisaged is developing positive attitudes toward such systems and leveraging their superior technical skills and precision for human benefit: say, using AI-based robots for deep-space exploration, exoplanet terraforming, harnessing alternative space resources, and making other planets habitable for us, since we are already counting the days of our planet (global warming). Or even, say, employing such intelligent agents to attempt to reverse or counter global warming could be a viable idea. I am more optimistic about the latter, and pessimistic about the former.
Lastly, we would perhaps need to allocate resources (computational as well as natural) both to design and to measure the intelligence of these objects of human creativity. There are compelling models for this, e.g., Kolmogorov complexity, an approach to measuring a system's intelligence via the computational resources needed for such maneuvers. I am hopeful that other such theories will be considered in parallel!
Dear contributors,
Take the most powerful computer. Launch an application by means of which the computer draws triangles at random and measures their sides and angles. Suppose that the application contains some routines to calculate sums, products, and roots. Suppose that the application can only execute those algorithms that were known before the year 300 B.C. Launch the application and wait until the computer discovers the Pythagorean Theorem. When the computer gets this theorem, let us know this wonderful fact on this web page. Of course, not every man is able to achieve such a feat; but from time to time one among billions can do it. In fact, human intelligence is cumulative and collective. Dogs also work as a team.
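A minimal Python sketch of this thought experiment (my own illustration, not from the thread; the sample count and tolerance are arbitrary): the program can verify the Pythagorean relation on any number of random right triangles, but the conjecture itself, a^2 + b^2 = c^2, had to be supplied by the human who wrote the check.

```python
import math
import random

def random_right_triangle():
    """Draw a right triangle with random legs and measure its sides."""
    a = random.uniform(1.0, 100.0)
    b = random.uniform(1.0, 100.0)
    c = math.hypot(a, b)  # length of the hypotenuse
    return a, b, c

# The machine can confirm the relation on thousands of samples...
for _ in range(100_000):
    a, b, c = random_right_triangle()
    assert abs(a * a + b * b - c * c) < 1e-6 * c * c
# ...but the relation a^2 + b^2 = c^2 was conjectured by the
# programmer, not discovered by the loop.
print("relation verified on all samples")
```

The loop only confirms what its author already knew; it contains no mechanism for proposing the theorem in the first place, which is exactly the point of the experiment above.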
Dr. Palomar
How long would that take a computer? Or am I fool enough to ask this?!
Dr. Palomar, I can now better appreciate the gravity of the scenario you have described. Thanks.
Dear Sidharta,
The question was rhetorical. I think that the computer would take an infinite time. With infinite time a stupid agent can do everything. For instance, a simple hash table together with a searching algorithm can solve every problem, as follows. Suppose that the table is infinite and contains in its first column every possible problem, followed by the solution in the second column of the same row. When a problem is introduced, the searching algorithm finds the row in which the problem lies, and in the second column of that row it gets the solution.
Of course, such a system is of no use, because the search could be infinite. However, if we can generate this table from a finite subtable, then the search becomes possible. My working hypothesis consists of assuming that a system is more intelligent when it can generate a problem-solution table from a smaller subtable. In fact, the matrix of every Turing machine is a generator for the set of all its possible input-output pairs.
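The hash-table argument can be made concrete in a few lines of Python (a hypothetical toy of my own; the entries are invented): the "solver" below answers a hard-looking question instantly, yet all of the intelligence lives in whoever filled the table.

```python
# A "solver" that is nothing but a lookup table plus a search.
# Every problem-solution pair was placed here by a programmer.
TABLE = {
    "derivative of x^2": "2x",
    "integral of cos(x)": "sin(x) + C",
    "roots of x^2 - 5x + 6": "x = 2 or x = 3",
}

def solve(problem):
    """Return the stored solution, or admit ignorance."""
    return TABLE.get(problem, "no entry: the table's author never solved this")

print(solve("roots of x^2 - 5x + 6"))   # looks smart
print(solve("roots of x^2 - 7x + 10"))  # exposes the trick
```

The system answers only what its author already answered; it cannot generate the table from a smaller subtable, which is the capability proposed above as the mark of intelligence.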
To illustrate this fact, consider that when a human learns a language, from knowing a finite sample of sentences he can understand an infinite set of them. In algebra such a mechanism is called a generating system.
Turing machines do not create their own generating matrix; the matrix is the responsibility of the programmer. I can prove that, to create generating systems, the human mind uses analogies. By contrast, computers work with identities, because they ignore the meaning of the symbols they handle.
I do not say that other kinds of machines in the future cannot handle analogies, but it is not the case for Turing machines. In addition, computers have an address-based memory, while human memory is based on analogies.
Finally, I see an analogy between the programmer-computer pair and the telescope-observer pair. Telescopes let observers see more, but telescopes themselves see nothing. Computers let humans think more quickly and smartly, but computers themselves are not able to think.
People can create systems that mimic systems from Nature, but is it safe to assume that we can never truly replicate Nature?
We already have replicated some parts of nature. This is called synthetic biology today.
http://www.guardian.co.uk/science/2010/may/20/craig-venter-synthetic-life-form
Absolute claims always contain a lot of risk. But you can claim and prove what actions a system can perform when its structure is known. For instance, see Arbib's book entitled "Brains, Machines and Mathematics".
Nowadays machines do not know the meaning of the symbols they handle; therefore they are systems which are ignorant of their own actions. I do not mean that, in the future, machines cannot know their own structures or even enhance them. What I mean is that machines that do not know the meaning of the symbols lying in their memory have a blind intelligence which cannot work without being driven by a human, say a programmer. In any case, consider that our knowledge about our own consciousness is rather small, even ridiculous. Perhaps, before constructing machines smarter than humans, we must be able to build machines that can do any intelligent action that humans can do.
In any case, we have already a method to replicate human brains, which is very pleasurable, and requires the action of both a man and a woman.... I am very pleased with it.
Perhaps a machine should acquire a rudimentary form of "free will" before it can become intelligent in the human sense. "Free will" is what makes one accountable for one's actions. A machine that cooks up something by its own "free will" can really be called smart, rather than only intelligent or a fast symbol juggler. Animals do have a form of free will. But paraphrasing Orwell, we notice that "some animals are more free than others."
Dr. Palomar:
The symbols fed to computers as programs are given meanings which a computer understands as machine codes. Such codes are effectively unlimited, and so are their attributes; but the real mechanics of programming perhaps lies in the science of designing programs that generate codes. Powerful search engines apply matrix-based algorithms, which we are quite aware of, based on symbol manipulation and binary systems. Programs are initially given structures that we perceive as 'structured'; computers do not. They just follow certain routines embedded in algorithms, which gives them a form or shape that only we can comprehend. Now, you have mentioned the matrix generators for which humans use analogies, and that is fine. But how does it make sense that computers ignore the meaning of the symbols they handle when they have addressable memories? How do they associate instructions with rules if they do not apply analogies?
Now, if one requires machines to work outside such programming guidelines, which AI systems are generally supposed to do in the future, they must be applying "if this, then that", not "then what", I suppose. This "then what" would force the machine to dwell outside the rules of rationality; then it could generate a matrix and start searching for correct rules, of course using analogies. And if those rules are correct for them, but not for us, who is going to answer for it?
For this there is only one way: we have to implement human genetic theory in the development of the robot's thinking. A person's ideas and thoughts come from the surroundings as well as from the genetics of his family blood. A robot will not have any bad genetics unless it has been tampered with by bad authorities. If we make the robot learn good things through the implementation of an artificial-genetics neural network, the problem will be solved.
Dr. Chatterjee
The example in my previous message shows that a stupid system consisting of a table of problems followed by their solutions, together with a searching algorithm, can solve every problem lying in the table. Even when the table contains complex differential equations that most humans are not able to solve, the machine can still be considered stupid. If your criterion is based on the problems a system can solve, this stupid system is smarter than most humans. I work under a different criterion: what I do not consider stupid is the author of the table, namely the programmer.
In addition, you have said that a computer understands the symbols as machine codes.
The only meanings that machine codes have for a computer are comparisons and substitutions; when a computer solves an equation it does not know what it is doing. An algorithm, from the viewpoint of a computer, consists of assigning a symbol sequence S to a problem and, by substituting symbols in S according to some substitution table T, obtaining another symbol sequence R which denotes the corresponding solution. However, the meanings of both S and R are known only to the programmer, and they are always a convention. Since the computer does not know the meaning of S and R, it cannot build the substitution table. Of course, the problem S to be solved can be the construction of the table T, and in this case the computer imitates the programmer. In any case, this algorithm requires another substitution table, and so on; that is to say, the work of the programmer cannot be avoided. To avoid the programmer's actions, machines would have to be able to create algorithms from scratch.
I do not say that this is impossible. My claim is that a human-like intelligent system must be able to understand abstract languages, and understanding an abstract language consists of being able to assign an analogical representation to any abstract-language sentence, and vice versa. For instance, consider a system having the capability of describing in English what is happening in a movie. You can try creating an application for your computer that describes any movie-like picture sequence, even sequences you have never seen before, by means of sentences that you have never read before. It is a very complex task, but such a description can be carried out by any 6-year-old child. Nevertheless, suppose that you could build such an application. It would be a poor achievement: in fact a child can do more. A child, by observing how other people describe image sequences, learns to do this task himself. Describing the underlying algebraic structure of this process is a very complex aim which cannot be stated in a few paragraphs, but I have investigated this topic for a long time, and I have found a few questions related to it.
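The S, T, R description above can be sketched as a toy rewriting system (my own illustration; the strings and rules are invented): the machine blindly applies substitutions from a table T to transform a problem string S into a result string R, without knowing what either string means.

```python
# Toy rewriting: simplify "+0" and "*1" by blind substitution.
# The table T encodes conventions only the programmer understands;
# the machine merely matches and replaces symbols.
T = [
    ("+0", ""),  # adding zero changes nothing
    ("*1", ""),  # multiplying by one changes nothing
]

def rewrite(s):
    """Apply the substitution table until the string stops changing."""
    changed = True
    while changed:
        changed = False
        for old, new in T:
            if old in s:
                s = s.replace(old, new)
                changed = True
    return s

print(rewrite("x+0*1"))  # -> "x"
```

Note that the machine cannot justify why these rules are valid, nor invent new ones: building T from scratch is exactly the programmer's work that, per the argument above, cannot be avoided.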
@Sir Palomar, please post the few questions that you have found related to this topic. They may be helpful for us.
As I have said before, it is a very complex task which cannot be stated in a few paragraphs. However, as soon as possible I will upload to my page on this site a PDF file devoted to this topic. Nevertheless, some questions and suggestions can be stated in this thread. For instance, my suggestion for comparing intelligent systems consists of measuring the economy of the data necessary to find out structures and patterns, disregarding the computing speed or the size of the problems they can solve. To illustrate this topic, consider the following facts.
Let A, B, and C be three Englishmen, and suppose that they have traveled to China for some months. A, B, and C have identical activities in China. Suppose that, once they have spent 6 months in China, they are able to understand a set E of Chinese sentences. Suppose that A needed to know a set of 200 sentences to be able to understand every sentence in E, while B needed a set of 500 sentences, and C needed to know the whole set E. Who do you think is smarter? Indeed, A is smarter than B and C, and C is rather stupid, because what C does can also be done by a hash table together with a searching algorithm, that is, a Turing machine. The human A is smarter because he needed less data to get at the meaning of each sentence in E. Needing less data is nothing but data economy. A's brain organizes the acquired information in order to find out its underlying structure, and once he knows the structure he can determine the patterns by means of which the whole set E can be generated from a subset containing only 200 sentences. From this viewpoint, the main intelligent action consists of finding out the structure of any set of data in order to generate it; consequently, the more intelligent a system is, the less data it needs to find out the underlying laws and patterns. This is why my suggestion is "intelligence = data economy", and consequently more transistors need not mean more intelligence.
I remember that when computer memory was very expensive, programs had to work with small amounts of RAM, and good programmers were able to build algorithms requiring only a few kilobytes.
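The "intelligence = data economy" suggestion is close in spirit to compression. A crude way to see it in code (my own sketch; zlib is only a mechanical stand-in for "finding the underlying structure", and the byte counts are illustrative assumptions): data generated by a small rule can be regenerated from far fewer bytes than data with no structure at all.

```python
import os
import zlib

patterned = b"the cat sat. " * 200  # 2600 bytes generated by a tiny rule
noise = os.urandom(2600)            # 2600 bytes with no structure

# A system that finds the rule can regenerate the data from a short
# description; zlib's output length is a rough proxy for that economy.
print(len(zlib.compress(patterned)))  # a few dozen bytes: high economy
print(len(zlib.compress(noise)))      # near 2600 bytes: no economy possible
```

On this crude measure, the learner A in the example above corresponds to the short description of the patterned data, while C corresponds to storing every byte verbatim.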
Dr. Palomar:
Your suggestion of "intelligence = data economy" is an interesting idea about finding the structure of data sets.
And so, through the previous explanations you have provided, another important aspect of machine intelligence comes up: its attributes in data and symbol manipulation, as well as in assigning an analogical representation to any abstract sentence, a point you have very accurately explained, which covers some aspects of the translation problem and most aspects of the representation problem in computer languages. Indeed, the computer does not have a unique mental model of symbol manipulation similar to our brain's, so all that it does is assign some form of analogy. But I suppose that does not fully define the problem.
In particular, some experiments done with kids learning algebra refer to such representations of key-word matching order while assigning meanings to symbols in solving equations. The good thing about computers is that they are good 'kids' of data compression, as well as decompression, thanks to their large virtual memories, or RAM power, and so they are better pattern recognizers, but they are not good at developing conjectures. Computers now have both large long-term and short-term memories at their disposal, whereas we humans lack short-term memory capacity; hence, when solving problems, we need to be flexible in using symbolism.
Our cognitive limitation stems from these facts that, though we are able to store vast amount of long term information, we are often poor in manipulating those number of items all at a time. For this, as one would agree, we perform routinization of procedures, which is similar to heuristic procedures of tagging symbols to label information. That may refer to describing a symbol itself as a 'process'. Actually there is a term in algebraic literature called "procept", which you know better, but to state it in simple words, we may define it as labeling a process with a symbol and it has applications in random proceptual flexibility in algebraic symbol manipulation. So, if i run up to determine the way that syntax suggest a specific order, i would assign 'x' a meaning as a representation of an object in an equation say, 4x+3= 12. The importance is, both for man and machines, to retrieve methods that explain random choice of responses that correlate to, or match such patterns of word order. But what about those statements that do not have syntactic constructions? I suppose they also apply matching orders. And how does it differ from semantic translation? . Here also, i assume it has some important applications in translating phrases into algebra equations. That would be very helpful if you may wish to explain.
So, a very important question props up, as, how to define syntactic equivalency from statements using algebraic assignment formulation to define higher order complexity of statements?
Here is a nice talk about "compression" as the underlying algorithmic principle behind creativity and intelligence.
Jürgen Schmidhuber at Singularity Summit 2009 - Compression Progress:
The Algorithmic Principle Behind Curiosity and Creativity
http://vimeo.com/7441291
Dr. Chatterjee
Konrad Burnik has mentioned to me an article of his own in a private message (thank you Konrad). In this article he has stated an accurate definition of what a computer can do:
---------
"Computation as we know it is merely a formal manipulation or transformation of symbols. It can be done by hand or by a computer. Either way, there is always a notion of a conceiver and an executor present when talking about computation. These two are usually one and the same, but I like to think of them as separate entities. The executor follows a fixed set of rules to transform a given string of symbols that the conceiver has conceived with some end goal in mind."
(K. Burnik)
--------
As Konrad has said, symbol manipulation can be done either by a computer or by a human; the only difference is the speed and, perhaps, a headache. When I say data economy I do not mean memory economy. Compression is a memory economy rather than a data economy. What I call data economy is that, in a dictionary, it is only necessary to write the infinitive of a regular verb, because the remaining tenses can be deduced, or generated, from it.
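A minimal Python sketch of this dictionary example (hypothetical helper, regular English verbs only): a single stored datum, the infinitive, generates the remaining forms on demand.

```python
def conjugate(infinitive):
    """Generate regular English verb forms from one stored datum."""
    if infinitive.endswith("e"):      # e.g. "smile" -> "smiled", "smiling"
        return {"past": infinitive + "d",
                "gerund": infinitive[:-1] + "ing",
                "3rd": infinitive + "s"}
    return {"past": infinitive + "ed",
            "gerund": infinitive + "ing",
            "3rd": infinitive + "s"}

# One entry per verb suffices; the paradigm is generated, not stored.
print(conjugate("walk"))   # {'past': 'walked', 'gerund': 'walking', '3rd': 'walks'}
print(conjugate("smile"))  # {'past': 'smiled', 'gerund': 'smiling', '3rd': 'smiles'}
```

Storing only the generator (the infinitive plus the rule) rather than every form is data economy in exactly this sense.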
In another article, Konrad has presented an accurate analysis of memory searches.
My frequent claim is that human memory is based upon analogies; that is, searches are carried out through analogous objects. Of course, there are primary associations based on conditioned reflexes (Pavlov), but these are blind associations acquired from experience, not from deduction.
I have proved in an article, which I hope to publish soon, that analogical representations are isomorphic to categories of sets with fuzzy subsets, because both structures consist of Kleisli categories associated with certain monads.
Following Konrad's ideas, I would like to present the following example of object search, performed by a dog.
Some years ago, the skills of a German dog attracted attention. The dog was able to identify 30 objects by their names in German. The objects were in the adjoining room: when his owner asked the dog to fetch a thing, the dog entered the room and brought back the object. On one occasion, the owner asked for a thing whose name the dog had never heard before. Nevertheless, the dog obeyed and entered the room; there he saw a new thing whose name he did not know, and he brought it to his owner.
This means that some dogs can not only search for objects they do not know, but can also assign a name to them, in this case the unknown name. A machine can solve complex differential equations, but it is far from being able to search for objects it does not know.
P.S.
Surely, dog lovers will agree with me.
Thanks Dr. Palomar and Dr. Konrad.
Indeed, a very interesting example. It shows how even dogs are sometimes smarter; that's the beauty of associative memory. I also checked the video presentation by Prof. Schmidhuber, which definitely indicates progress toward new data compression technology. Thanks to both of you for sharing such interesting insights.
Check this out:
Based on Pavlov's classical conditioning, new thoughts on animal learning methodologies, non-associative techniques.
http://www.examiner.com/dog-training-in-national/animal-learning-part-i-associative-learning
www.pyoudeyer.com/roboticsAndAutonomousSystems.pdf
Also, a treatise by David McFarland on associative learning:
"Guilty robots, happy dogs: the question of alien minds".
ASSOCIATIVE LEARNING FOR A ROBOT INTELLIGENCE
www.icpress.co.uk/compsci/p113.html
Going on with the topic of data economy, consider the following examples.
System T1: a computer application containing a subroutine, with the identifier "reverse", that transforms each list into its reverse. For instance, denoting its action in a function-like style, "reverse [a,b,c] --> [c,b,a]".
The system can return the reverse of every list, but the merit belongs to the programmer. From a finite routine it can return the reverse of each member of an infinite set of lists; hence there is data economy in this system.
System T2: a system consisting of a memory with a search algorithm. When the phrase "reverse [a,b,c]" occurs at its input, the system finds in its memory the pair (reverse [a,b,c], [c,b,a]) and returns the second coordinate, that is, [c,b,a]. However, if the input is "reverse [1,2,3]" and the memory of T2 does not contain it, the system cannot return any output.
T2 can only return those lists contained in its memory; there is no data economy in this system. On the other hand, the system is not programmed to reverse lists: its action consists of the simple repetition of what it sees.
System T3: a system consisting of a memory and a search algorithm based on analogies. Suppose that its memory contains the pair (reverse [a,b,c], [c,b,a]) but does not contain (reverse [1,2,3], [3,2,1]). When the command "reverse [1,2,3]" occurs at its input device, it finds by analogy the most similar input in its memory, namely "reverse [a,b,c]". Comparing the two, the system deduces that the function P defined by a --> 1, b --> 2 and c --> 3 transforms [a,b,c] into [1,2,3]; therefore the inverse P^(-1) transforms the considered input into one lying in its memory. Once it has deduced P, it applies P to the stored output [c,b,a] of "reverse [a,b,c]" to obtain the output [3,2,1]. This system can generate outputs that its memory does not contain. There is data economy, because from the single datum (reverse [a,b,c], [c,b,a]) it can return an infinite set of lists.
Unlike T1, the system T3 can work without the help of a programmer, but it requires the capability of deducing transforms by comparing lists.
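The behaviour of T3 can be sketched in a few lines of Python (a hypothetical minimal implementation, not the author's: the memory holds a single pair, and the analogy step deduces the substitution P by position):

```python
# Memory: one stored (command, input) -> output pair, as in the example.
MEMORY = {("reverse", ("a", "b", "c")): ("c", "b", "a")}

def t3(command, args):
    # Exact match: behave like T2's simple lookup.
    if (command, args) in MEMORY:
        return MEMORY[(command, args)]
    # Analogy: find a stored input of the same shape and deduce the
    # substitution P (e.g. a -> 1, b -> 2, c -> 3) by position.
    for (cmd, known_args), known_out in MEMORY.items():
        if cmd == command and len(known_args) == len(args):
            P = dict(zip(known_args, args))
            # Applying P to the stored output transports the stored answer
            # to the new symbols (P^-1 carries the new input back to memory).
            return tuple(P[s] for s in known_out)
    return None   # no analogous datum found

print(t3("reverse", ("1", "2", "3")))   # ('3', '2', '1'), although never stored
```

From the single stored datum, the system answers infinitely many queries, which is the data economy the example describes.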
Now analyze what each system does. T1 can return the image of every argument of the function "reverse" using a finite routine. However, T1 is a stupid system that must be programmed by a human.
By contrast, T2 and T3 need not be programmed. T2, however, knows nothing beyond what its memory contains; its actions consist only of repeating what it has seen before. T3, by contrast, uses its memory as a generator of analogous outputs. It need not be programmed and, moreover, it enhances, by means of transforms, what it has seen through its own experience. There is data economy in the structure of T3, and it is a self-learning system; T3 is the smartest of the three systems described. It is a creative system. The only drawback is that T3 must be able to handle transforms. But transforms are first-class citizens in the human mind; we are always observing them. The shadow of a coin goes from circle to ellipse depending on the slant of the light. Hairdressers transform a woman's look. Animals and plants grow. Objects move from one position to another, and so on.
Concepts are nothing but what remains unaltered under some group of transforms. When watching an old snapshot, you point to some child and say "this is me": the concept denoted by the term "me" consists of all the features that have remained unaltered over the years. The concept of distance is related to those properties which remain unaltered under isometries, and so on.
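A small Python illustration of this view of concepts (an assumed example, with planar rotation as the isometry): the distance between two points is precisely the quantity that survives the transform.

```python
import math

def rotate(p, theta):
    """Rotate a 2D point about the origin: an isometry of the plane."""
    x, y = p
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

def dist(p, q):
    """Euclidean distance: the invariant that embodies the 'concept'."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

p1, p2 = (1.0, 2.0), (4.0, 6.0)
r1, r2 = rotate(p1, 0.7), rotate(p2, 0.7)
print(dist(p1, p2), dist(r1, r2))   # both ≈ 5.0: unaltered by the isometry
```

The individual coordinates change under the rotation; what stays fixed, the distance, is what the concept captures.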
Now consider a dog whose owner orders him to take away objects. Suppose that the dog understands the command "take away the hat" and that the dog knows what a key is. To understand the command "take away the key", the dog must apply the function "hat --> key" to the first command. This is precisely the kind of transform that the system T3 above can perform. Of course, dog brains can perform transforms that enhance the data stored in their memory. Dog brains are smarter than computers, but, obviously, they do not surpass the intelligence of the computer+programmer symbiosis.
Dr. Palomar, I couldn't find the phrase "datum economy" even after a lot of Google searching since you first mentioned it, so I suppose the phrase, as a concept, is likely to become a new topic in systems research. Well, it sounds dizzying... yet interesting. Is anybody working on this?
Dr. Chatterjee,
I have used the expression "data economy" in a colloquial style to avoid more precise related terms such as "generating system", G-universal arrow, structured arrow, etc. If the text were part of a scientific paper I would have used one of these terms depending on the context, since my work always gravitates around categorical algebra. But in a discussion of this kind, my only intention was to express the following algebraic fact: to provide a construct, that is, a structured set C, a generating system G for it is sufficient, whenever the receiver is able to build the whole set C from the generator G. Almost every intelligent process of synthesis, abstraction and enhancement is built in this way, in order to avoid handling large sets whenever they can be generated by small subsets. Of course, this strategy saves a lot of information, and this is why I have termed it data economy; this thread is the first time I have used the term. Indeed, in my previous examples, the difference between a stupid system and an intelligent one consists only of this kind of economy. It is my opinion that the more intelligent a system is, the less information it needs in order to learn.
It depends on the field... aren't computers smarter than you in maths?
Dr. Palomar,
Well, it took quite some time to figure out what answers could be lying hidden under your questions, so sorry for such a late reply. But I am still confounded on the matter of defining vector spaces in terms of quaternions, though I suppose such possibilities do exist.
One would agree that the flexibility of a fuzzy logic system stems in part from the fact that it can incorporate quaternion sets. Yes, Maxwell's scalar equations can be written in terms of quaternions, as can kinematics in space. This arises from the remarkable algebraic properties of Hamilton's quaternions, where the dimensionality of vector spaces can be increased beyond the complex or hypercomplex number systems, which gives quaternions more flexibility. This happens in particular when moving from the domain of the real numbers into the world of imagination (imaginary numbers), defining a vector space that provides more flexibility that can be incorporated into fuzzy sets.
The general form of a quaternion is
q = a·1 + b·i + c·j + d·k, where a, b, c and d are real numbers
and 1 is the basis element of the real part. For multiplication of quaternions,
i^2 = j^2 = k^2 = ijk = -1,
where i, j, k are the imaginary basis elements of H. From these relations the six basis products defined by Hamilton can be derived. The Hamilton product lets one expand the product of two quaternions as a sum of products of basis elements, which is what gives it its power.
Consider a simple example. For a quaternion
q = a + i·b + j·c + k·d,
write |q| for its length, θ for its angle and n̂ for its axis. Take q1 = 1 + 2i + 3j + 4k; then
|q1| = 5.4772
θ = 1.3872
n̂ = (0.37139, 0.55709, 0.74278)
For the inverse q1^(-1), the corresponding output would be
|q1^(-1)| = 0.18257
θ = 1.3872
n̂ = (-0.37139, -0.55709, -0.74278)
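These values can be checked with a short Python sketch (hypothetical helper functions, assuming the conventions angle = atan2(|v|, w) and axis = v/|v| for q = w + v; the numbers quoted above are those of q = 1 + 2i + 3j + 4k):

```python
import math

def polar(w, x, y, z):
    """Return (length |q|, angle, axis) of q = w + x*i + y*j + z*k."""
    length = math.sqrt(w*w + x*x + y*y + z*z)
    vnorm = math.sqrt(x*x + y*y + z*z)          # length of the vector part
    angle = math.atan2(vnorm, w)                 # angle from the real axis
    axis = (x / vnorm, y / vnorm, z / vnorm)     # unit vector part
    return length, angle, axis

def inverse(w, x, y, z):
    """Quaternion inverse: conjugate(q) / |q|^2."""
    n2 = w*w + x*x + y*y + z*z
    return (w / n2, -x / n2, -y / n2, -z / n2)

q = (1, 2, 3, 4)                                 # q = 1 + 2i + 3j + 4k
print(polar(*q))             # ≈ (5.4772, 1.3872, (0.37139, 0.55709, 0.74278))
print(polar(*inverse(*q)))   # ≈ (0.18257, 1.3872, (-0.37139, -0.55709, -0.74278))
```

Note that the angle is unchanged under inversion while the axis is negated, as in the figures above.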
In the same way, q2 and q3 can be computed, or the multiplication can be performed using matrices of vectors. It can be seen that the dimensions can be increased, and likewise the subgroups.
This can be applied in a four-dimensional vector space over the reals derived from probability maps of an uncertain vector space, for example in computing the number of neural network interconnections when the parameters are not all fully defined.
For example, if one considers the Cayley graph of Q8, the dimensionality of vector spaces can be dramatically increased beyond conventional 2D complex number systems, which is an additional advantage for associative multiplication within vector spaces. Thus quaternions find application as rotation operators in vector spaces, and hence are useful in machine vision involving 3D rotations, where they may aid perception-action-feedback control in uncertain environments. The flexibility and diversity of the computations associated with quaternions aid in designing conformal maps on space-time; Hamilton's quaternions are thus capable of modeling metric spaces using metric topology. The noncommutativity of quaternions may allow more flexibility, with infinite solutions embedded in current fuzzy sets, but that is questionable: there is still no such evidence beyond control theories in robotics and AI, and whether fuzzy sets accommodate imaginary sets well is beyond the realm of this discussion, at least for me. That would require us to explore all the possible domains of hypercomplex numbers, and even tensor calculus, to support such a claim. Still, it is pertinent to note that this is a far better option than computing imaginary quotients of the coordinates of two points in space using complex numbers. Also, if one applies Hurwitz quaternions, it might be possible to derive subrings of rings and further increase the number of option vectors.
Quaternions, operating beyond the 3rd dimension (1 for time and 3 for space), make the 4th dimension definable and integrate such data in noncommutative ways. Hamilton's quaternions can be applied to design and navigate systems based on program automation of rotational dynamics in finite vector spaces confined to the 4th dimension using contextual inputs, where the input vector can be expanded substantially. So I may say that it is still possible to model vectors as spatial inverses using quaternions. In that sense, fuzzy logic sets can be described as an adaptive intelligence model based on inductive reasoning applying principles of deductive reasoning, creating a domain flexible enough to accommodate the limitations of real absolute numbers, as in Boolean logic.
Yet the question that may be raised is: why can energy values not be defined using vector sets? While Maxwell's scalar energies can be well defined using Hamilton's quaternions, vector sets do not include such energy values, which are real. I need to look more deeply into this problem, as there are no offered solutions.
Thanks.
Nobody will ask us. IMO, instead of asking questions about the future, I prefer to find out the laws, create models, and base predictions on them.
If computers are smarter than us today in some respects, what is the guarantee that they will not be more stupid than us in the future? Who can ensure machine smartness?
This discussion is a meaningless task without previously defining both concepts: "smart" and "smarter".
Before comparing smartness or intelligence we must define an ordering in the intelligent object class. Once an ordering is stated, "smarter" will be interpreted exclusively according to the accepted definition. Comparing system smartness intuitively is a subjective process that cannot be accepted in science. Science must be universal, but subjective actions are private and based upon personal preferences or prejudices.
Only psychologists dare to measure intelligence without knowing its structure, that is to say, without knowing what thing they are measuring. Indeed, there are rational viewpoints in their methods, but surely Einstein at 17 would not have passed most of their elementary tests; in fact, Einstein failed his university entrance exam in 1895, at the age of 17.
Seconding Juan-Esteban, I will add that whatever the measures, and their flaws, they apply to individuals, not to populations.
If one views machine intelligence as the next evolution of life on Earth, then we may be the progenitors of a higher form of life. Just as we replaced previous models of human beings, eventually we too may be replaced. I doubt you spend much time grieving over the loss of Homo neanderthalensis, or any of our other predecessors.
The big difference now is that we will have a very heavy hand in who or what replaces us. I recommend instead of worrying about being replaced that we adopt a principled stance as a parental race. We could create our AI children, tweak them, ensure they possess the best of our characteristics (compassion, the need to explore, tolerance, etc.) and send them off into the universe.
Think about it. We could live out our species here on Earth while our AI offspring explore the cosmos, create colonies and perhaps meet other lifeforms. Perhaps instead of being replaced we will join them once they set up a colony. The possibilities are endless.
Endless that is, if we learn to let go of our fear of extinction. Only then will we be free to create our heirs to this and many other worlds.
I will quote: "We can make robots that can:
*fly, swim and even walk;
*recognize things and people;
*create world/system models, analyze them, get new knowledge about systems and situations, make predictions...
BUT we can't make a robot that will go to the shop to buy milk..."
There is a reason why robots are called intelligent, not intellectual... they can act smart but not wise... the challenge with knowledge is not to have it or collect it, but to use it (independently and quickly) to create new knowledge, to learn from mistakes, and sometimes to know the moment when to think irrationally in order to take the best decision and action...
In dictionaries there is a lot of data (concepts, history, examples)... which, using a knowledge representation schema, could be linked... and could afterwards produce logical sentences, and also illogical ones... will we call the dictionary smart?... by definition yes... if it is capable of some independent actions...
But the goal of intelligent systems is to help people do their work... and what about hurting people? You don't need an intelligent machine to hurt people... sometimes you don't need intelligence at all... and machines used in the military are programmed to do harm...
The question is always about usage...
Yes ... it does... good demonstration indeed
Programmed to use available knowledge (about maps) from trusted sources... but what about unknown conditions, adaptation, cooperation, obstacles? A simple example: the robot knows how to recharge, but there is no electricity... there are a lot of if... then... rules, and not all of them can be found in sources... and whatever can be found must be analyzed: is it true or not?... there are things that have to be learned by experience.
To create something independent and intelligent, there must be cooperation among all the sciences... and nothing is as adaptable and viable as a living organism...
And about the question at the beginning: should we allow machines to become smarter than humans?
Some of them already have more knowledge about a specific problem domain than one person has or can have, because they hold knowledge contributed by many human beings and many sources... and they need it to do the planned work, to achieve the goals for which they were designed... and there is nothing we can or should influence, because we need machines to deal with problems that are, for example, dangerous for human beings...
I don't remember the author, but there is a quote: "Everything is possible; some things just need more time than others."
I suggest watching the movie Bicentennial Man: http://www.imdb.com/title/tt0182789/
Fantasy, of course, but a lot of nice ideas are presented to show the difference between living organisms and machines :)
Intelligent actions are characterized by the following fact:
When an intelligent system A can understand the description of the working of another system B, then A can carry out the same actions as B. For instance, if you understand the procedure for solving an equation, you can also solve it. By contrast, you can understand the working of a television set, yet you cannot do the same tasks as a television set.
Thus, an ordering to compare two intelligent systems can be defined as follows: A < B provided that B can understand the description of every intelligent action that A can carry out.
If a human invents a computer, of course he can understand every intelligent action that the computer can do. Perhaps the human is slower, but what makes an action intelligent is not speed.
For a computer to be smarter than the human who engineered it, the human would have to be unable to understand the working of the computer he has built. I cannot understand how a human could engineer a system whose structure he cannot understand.
I have just put a mirror in front of my computer, and it cannot identify itself by observing its image in the mirror. By contrast, a chimpanzee can identify his own image... even a kea parrot can solve some problems without being programmed, that is, he can arrive at the solution.
That would be a definition of consciousness, i.e. the ability to recognize the other as a thinking entity like oneself.
Intelligence could alternatively be described in terms of learning capacity. On that account, computers can observe human behaviors and learn their modus operandi, not just as imitators but also by capturing their rationale.
In other words: without being programmed. Notice that learning the human modus operandi requires being able to understand descriptions of any intelligent action that a human can carry out, because such actions belong to the human modus operandi too.
To this end, a prior capability consists of assigning a meaning to each sentence, that is, a pictorial representation of it together with the described actions involved. Since abstract thought always runs through an abstract language, the first ability to be learned consists of understanding some abstract language. This is why I have defined intelligence in terms of understanding descriptions of intelligent actions. Indeed, "understanding" requires a prior learning process. Perhaps both descriptions are equivalent.
In-depth learning (i.e. with the rationale behind it) doesn't require symbolic representation, as illustrated by computers whose learning capability can be programmed either by rules or by neural networks.
Instead of "understand", which implies symbolic representations, I would rather use "recognize", which only means that I'm facing a mental capacity like mine.
So, contrary to consciousness, learning is not a discriminant criterion.
Of course, learning capability can be programed. Perhaps every human capability can be programed, but I am talking about learning without being programed, or more accurately, the capability of understanding, learning and any other intelligent action without being programed. Consider that if we accept the evolution of animals from unicellular beings, the intelligence has been obtained by evolution, that is to say without being programed.
For an intelligent system, a program is mainly the result of an intelligent action and not its cause. Those machines which need to be programed are not intelligent systems, they are only products of intelligent systems. This is why to be smarter than a human, we cannot consider systems that need to be programed, but evolutive ones learning from its own experience.
The couple computer-human is similar to telescope-human. A telescope sees nothing without a human behind. A computer thinks nothing without a human behind. A computer does not know what the symbols in his memory could mean.
The old idea that computers will become intelligent is gradually losing steam. Computers are more and more performant at following instructions quickly and cheaply, and are becoming ever more ubiquitous in our lives, mainly through the development of the Web. The learning side, or intelligent side, of computers and machines has not evolved significantly, and this research has had very minimal impact on the use of computers.
On this issue I can quote a theory I have worked with since 1966, the Elementary Pragmatic Model (see the many voices on Google): the Elementary Pragmatic Model (EPM) can enlarge the field of our ideas by introducing new ways of thinking. Through EPM you can find strategies in psychotherapy, write articles for newspapers, and have a sort of guide in life. I am very happy to have had the occasion to introduce it on ResearchGate.
Collective Intelligence:
Doug Engelbart is the inventor of the mouse and of the concept of window interface.
Doug Engelbart's career was inspired in 1951 when he got engaged and suddenly realized he had no career goals beyond getting a good education and a decent job. Over several months he reasoned that:
(1) he would focus his career on making the world a better place;
(2) any serious effort to make the world better requires some kind of organized effort;
(3) harnessing the collective human intellect of all the people contributing to effective solutions was the key;
(4) if you could dramatically improve how we do that, you'd be boosting every effort on the planet to solve important problems - the sooner the better; and
(5) computers could be the vehicle for dramatically improving this capability.
In 1945, Engelbart had read with interest Vannevar Bush's article "As We May Think", a call to arms for making knowledge widely available as a national peacetime grand challenge. Doug had also read something about computers (a relatively recent phenomenon), and from his experience as a radar technician he knew that information could be analyzed and displayed on a screen. He suddenly envisioned intellectual workers sitting at display 'working stations', flying through information space, harnessing their collective intellectual capacity to solve important problems together in much more powerful ways. Harnessing collective intellect, facilitated by interactive computers, became his life's mission at a time when computers were viewed as number crunching tools.
Before Engelbart, before computers, back in the 1930s, Teilhard de Chardin was thinking about the evolutionary future of this planet. He saw that humanity was in the process of creating a noosphere above the biosphere.
The future of humanity is not machine intelligence but collective intelligence.
Machines are smarter than humans nowadays in some respects that rely on large amounts of calculation and on searching for a solution in a large space, because they are faster than humans; for instance, try playing chess against a computer!
There have also been large steps in recognition: OCR, speech recognition, and object detection and recognition within a specific domain.
On the other hand, computers' understanding is very primitive, and it will take a very long time to know whether computers can think like humans. See Juan-Esteban's comments.
I take issue with the question itself, particularly its use of the word "smarter" - what does that mean? One person discovers a cure for cancer, another wins the Nobel Prize for Literature - which of them is smarter?
There are already areas in which computers are "smarter" than humans - see, e.g., the calculation of pi.
There are other areas which are simply beyond the capacity of computers - see, e.g., how to breed.
So....which is smarter, the human who has only a limited lifespan for the calculation of pi, or the computer which, even if it could make copies of itself, needs human intermediaries to plug it in (and to produce the energy that makes the plug work)? Until you can answer this, the original question is nonsense.
And, of course, the determination of "smarter" will be made by... humans.
My dictionary defines "smart" as "having or showing a quick-witted intelligence" and "intelligence" as "the ability to acquire and apply knowledge and skills". By this definition, I believe computers (i.e. algorithms) could already be said to be smarter than us. For me, the question is: will they someday be more creative than us?
A door knob is a very efficient mechanism that allows one to open a door. I do not think that any artificial intelligent system existing today is more intelligent than a door knob: more complex, with more complicated functions, faster, but not more intelligent.
Reply to Jose Fornari and Louis Brassard: yes, computers are indeed already "smarter" than us when it comes to things like playing chess (a game of complete information) or computing the value of pi (an exercise that can be performed by most middle school students and many even younger), but so what? As Mr. Brassard points out, this is "faster", not "more intelligent". Watson, the Jeopardy!-winning computer, has a huge database of facts and enough understanding of human speech and language patterns to be able to win at Jeopardy!, but only because humans provided them for that specific purpose. A better test would be how well a computer plays bridge or some other game of incomplete information against human opponents, which, in addition to processing speed, requires the ability to "read" one's opponents and learn their body language, behavior patterns, etc. In that circumstance there isn't a computer in the world that's "smarter" than a human.
"Is a rat smarter than Google? That's what two AI experts say"
http://www.msnbc.msn.com/id/47677356/ns/technology_and_science-science/#.T9HLrrCREvk
See I.J.Good, The first ultra-intelligent machine. Circa 1960.
Also see a story by Stanislaw Lem about the Golem. Lem's Golem was a fictional ultra-intelligent machine, housed at MIT [natch!]. The Golem's view of humans: humans are a way for a molecule to replicate itself, and the first need of prolonging our existence is to forget and obliterate any knowledge of that fact. What we call the humanities (arts, literature, even philosophy) are only means to help with that forgetting.
The proponents of a coming computational singularity forget that computational evolution is not the evolution of a new type of natural organism but part of the cultural evolution of the human super-organism. The majority of computers are cells of computation networks which support social networks. From the sixties up to the 90s, computers were a novelty, but today they are gradually fading into the background of the communication infrastructure. I expect that in fifty years very few people will know what a computer is; only a tiny group of specialists will know and care about improving the efficiency of computers. Everybody will be concerned with evolving new types of social networks that will become mental networks. Right now, businesses are controlled by one person or a very small group of people. The first generation of mental networks may allow more people to interact intelligently to control all kinds of businesses. Eventually this kind of mental network will allow the implementation of the age-old anarchist dream of a collaborative society. This is the kind of singularity I am looking for.