The knowledge claim about the possibility of Artificial Super-Intelligence in the future raises several questions for us. Is it a metaphysical possibility or philosophical jargon? Can artificial intelligence surpass human intelligence? Can A.I. machines (functionally and behaviourally identical to a human agent) be built to operate independently, without the intervention of human intelligence, so that they not only work but also think like human beings? Can there be a singularity in the field of artificial intelligence in the future? The rapid development in the field of A.I. within two decades makes us think about the future prospects of A.I. and the possible threats to humanity. Several ethical issues are at stake here and should not be ignored.
If rationality is the criterion for the autonomy of the agency of an organism, as stated by Immanuel Kant, then can artificially intelligent machines satisfy the criterion of rationality required for the status of autonomy that is applied to the human organism?
Preprint: Kantian notion of freedom and Autonomy of Artificial Agency
I may be treating the question more seriously than you had in mind; it is a fun idea to consider. This said, some questions that first appear to be philosophical in nature turn out to be something else. This may be such a question. First, the notion of artificial superintelligence (ASI) has yet to acquire the conceptual attributes required to consider it a hypothetical construct. Second, even if we were to treat ASI as a metaphysical construct, the criteria for acquiring such a label are not demanding. A mature construct of this sort makes it through the metaphysical gate if it is neither semantically meaningless nor logically impossible. Last, I don't think philosophical jargon is a polar opposite of metaphysical possibility, as your question seems to imply.
A related question I find interesting is how we would know whether an artificial intelligence was superior to ours (i.e., "super"). This question has both conceptual and empirical dimensions. In my time spent on the topic, its initial appearance of simplicity gave way to a host of conceptual problems.
Again, I apologize if I took the question too seriously. The relations between machine and human intelligence are as fascinating as they are important to explore.
The problem in trying to answer questions about superintelligence starts with grounding the notion in something beyond abstract possibility.
The first thing to consider is how to define superintelligence. Consider Bostrom's definition:
"any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest"[1]
While fancy, it does not help in any way as a measurable quantity. Terms like "greatly exceeds", "performance of humans", or "domains of interest" serve to muddle the subject rather than illuminate the definition of superintelligence.
I would rather focus on defining superintelligence via Yampolskiy's notion of AI-completeness [2], which places natural language processing, problem solving, image understanding, and knowledge representation and reasoning in that category, and then measure performance on those problems with concrete algorithmic metrics to determine whether the AI passes the test.
Whether AI can surpass our own intelligence will depend on getting people to agree on a definition more operational than Bostrom's (which is usually where agreement breaks down), and then actually testing the AI platform against it.
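A definition along Yampolskiy's lines at least admits a crude operational test: beat a human baseline in every AI-complete task category, not just some of them. The following is a minimal sketch of that idea; the task names, baseline scores, and threshold convention are purely illustrative assumptions, not real benchmarks.

```python
# Hypothetical sketch: "surpasses humans" operationalized as exceeding
# a human baseline score in EVERY AI-complete task category.
# All task names and numbers below are invented for illustration.

HUMAN_BASELINES = {
    "natural_language_processing": 0.90,
    "problem_solving": 0.85,
    "image_understanding": 0.88,
    "knowledge_representation": 0.80,
}

def surpasses_humans(ai_scores: dict) -> bool:
    """True only if the AI strictly beats the human baseline in all domains."""
    return all(
        ai_scores.get(task, 0.0) > baseline
        for task, baseline in HUMAN_BASELINES.items()
    )

# An AI that excels in three domains but not the fourth does not qualify.
narrow_ai = {"natural_language_processing": 0.95,
             "problem_solving": 0.97,
             "image_understanding": 0.93,
             "knowledge_representation": 0.70}
broad_ai = dict(narrow_ai, knowledge_representation=0.91)

print(surpasses_humans(narrow_ai))  # False
print(surpasses_humans(broad_ai))   # True
```

The design choice worth noting is the universal quantifier: Bostrom's "virtually all domains" is replaced here with a strict "all domains" test, which is exactly the kind of definitional decision people would first have to agree on.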
Part of the problem with the ethical postulations is that most of them start from the assumption of Bostrom's paper-clip scenario, which relies on the orthogonality and instrumental-convergence theses; you can read my objections here:
https://www.researchgate.net/post/Is_Bostroms_orthogonality_and_instrumental_thesis_make_a_consistent_argument
Further, to me, the paper-clip AI is not superintelligence but just dumb AI (and of those we have plenty, not in the future but today).
[1] Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
[2] Yampolskiy, R. V. (2015). Artificial Superintelligence: A Futuristic Approach. Chapman and Hall/CRC.
Dear Manas,
Suppose that we are two hundred years in the past, at a time when humans were observing birds in flight. Many, like Leonardo, were trying to find ways for humans to fly. Leonardo did not succeed, but he was on the right track; he failed not because he misunderstood the basic principles of flight, but on more technical grounds, for lack of light enough material for the wings.
But the field of artificial intelligence, which was originally founded on the hypothesis that it is possible to create intelligent machines, intelligent in the sense humans are, has never clarified what is meant by an AI, nor established any fundamental principle of intelligence. So the philosophical basis of this field has never been clarified; it has simply continued by redefining AI systems as advanced automated systems that learn from experience. Forget about "super-intelligence": the field has not the beginning of a clue about actual biological intelligence and does not even bother trying to find out. In AI jargon, that would simply mean devices that learn to detect subtle useful patterns in huge data streams.
Surpassing humans? Very funny. We have always invented tools to surpass our tool-less selves. And when we invent autonomous tool systems, we make them better than ourselves; otherwise they would be uneconomical and thus useless. A windmill surpasses us. When Pascal made his mechanical adding machine, it surpassed us working with paper and stylus, or with memory alone. So surpassing us has always been the basic minimal achievement of any successful device we have invented. Notice that not a single one of our devices has invented anything. Our devices only do what we design them to do, and they always do it better than we do; otherwise we throw them in the junk. Since none of the devices we create can do anything other than what we design them for, we should only be afraid of our own intentions in designing them, or the intentions of those paying the engineers. No system can have an intention of its own. Only defective systems deviate from our intentions, and not with intentions of their own.
The ethical issues for AI engineers are the same as for all other engineers: is it OK to design systems that are more likely to be used for the benefit of a few against the benefit of most humans? Answering this question is not necessarily straightforward. What is harder than answering it is being ready to refuse to participate in something very interesting, lucrative, and career-promoting that is unethical in the sense mentioned above. Most scientists and engineers prefer not even to bother with such questions, simply reasoning that it is not their problem.
Regards,
- Louis
"Nous—intelligence, immediate awareness, intuition, intuitive intellect; (...) every intelligence is its own object, therefore the act of intellection always involves self-consciousness. (...) Nous is independent of body and thus immune from destruction—it is the unitary and divine element, or the spark of divine light (...) through which the ascent to the divine Sun is made possible." --Algis Uždavinys (The Golden Chain)
Can artificial intelligence surpass human intelligence? Yes, in some respects. For example, computers can consider more options and see a fuller picture of possibilities. Good AI can detect patterns, reducing complex questions to just a few patterns that humans can then investigate with great efficiency, reaching a higher level of intelligence.
Does AI pose an ethical threat to humanity? It does, by extending human experience into new areas that existing ethical rules don't cover. Even areas that do have ethical rules are at risk, since the scope and speed of AI leave no time for human correction; AI's complexity also intimidates people, makes them question less, and creates a barrier between them and the people in charge.
If you're a monist materialist and you believe that what makes a person able to think is their physical composition, then the simple answer is yes, a computer built the right way would be capable of anything a person can do and more.
If you're a dualist and believe in the separation of body and mind, or body and soul, then the answer is much less certain, because to imbue a machine with the same 'stuff' you would perhaps first need to intimately understand the structure and composition of what you're imbuing it with.
Although on this latter point, Turing wrote "In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates."
In my opinion this implies that Turing was of the perspective that he would not need to understand the soul in order for a machine to possess one.
Returning to your question of whether an artificial super intelligence could be functionally and behaviourally 'identical' to human beings: on the functionality point, to function in an identical way to a person would surely mean that it was a person? Even people do not function identically to one another; twins that are genetically identical in every sense will still differ in thought, owing to their personal experiences.
This raises the question of how similar something is required to be, before it can be considered to be within the same class. I'm afraid this is where science has strayed into dangerous ground in the past, for example with the question of whether a person is still a person if they do not possess a trait which would be considered a requirement for that classification.
Behaviourally however, if a machine was given the same or a greater capacity for measuring input (tongue, eyes etc); and if the machine was given the same capacity for output (a mouth, voice box etc), then the only hard limit on its ability to 'behave' in the same way as a person would be its capacity for taking that input and translating it into the same outputs.
The way in which the conversion occurs is where we return to the original two points, the dualist vs monist materialist perspectives. These points, combined with the question of functional similarity, are where most would base their argument against an artificial super intelligence.
In my personal opinion, the answer is that yes, an artificial super intelligence or singularity is capable of existing and would be capable of surpassing people in all ways.
For further reading on this topic, I would recommend Alan Turing, John Searle, René Descartes, Kevin Warwick and Ethem Alpaydin. Although there are many others, and for a more complete perspective on your question there is a great deal of cross-disciplinary reading from the fields of Computer Science, Philosophy, Psychology and Sociology (among others).
Certainly it is a metaphysical question and challenge, a challenge for human thinking and identity; it must be considered one of the most serious limits of what we are and could be, in a triple sense: philosophical, religious and political. Sometimes the biggest danger also offers a way out, but never without conditions.
Dear Louis Brassard
If we look at the current development in A.I., isn't it conceivable that after 60 years the technological advancement in A.I. will reach a point where it can replace human beings in every respect? Once A.I. reaches the level of Artificial General Intelligence, it is possible that there will be another kind of power struggle (the power struggle between A.I. and inefficient human beings). As you have mentioned, in certain things A.I. has already reached the level at which it can perform better than human beings. What if A.I. scientists develop Autonomous Artificial Intelligence (self-programming autonomous intelligent machines) that no longer performs what we designed it to do?
------- " When we invent autonomous tool systems, we made them better than ourselves otherwise it would be not economical and thus useless.
What if there is a conflict between human autonomy and artificial autonomy?
The ethical principle of survival of the fittest might go wrong for us.
The notion of AI may itself depend upon the assumed definition of intelligence itself, human intelligence in general. Thus, AI would possess at least similar (but not necessarily the same) properties as human intelligence. This leads to the academic debate on AI in terms of either Weak AI or Strong AI. The former may simulate some human intellectual activities below the level of general human intelligence; the latter is similar or nearly equal to it. AI that may exceed the level of Strong AI is sometimes called superintelligence, a super-intelligent agent, or an ultraintelligent machine. This leads to a possible conclusion that AI may emerge outside the human body as a super-intelligent agent, an autonomous machine, especially when the possible emergence of AI is perceived in the virtual environment as a result of computation...
The key point about possible AI emergence and its impact on the future of human society and the order of social arrangements is the question of how dangerous for humankind the existential risk is that is tied to AI emancipation and the possible supremacy of superintelligence. This breaking moment, the technological turning point, is usually called a singularity...
Article: Why AI shall emerge in the one of possible worlds?
Why should metaphysical possibilities and philosophical jargon not intersect? I have a feeling they often do...
It is simply unhappy philosophical jargon. Higher mentality and intelligence pertain to persons, each of whom is a singular being, epistemically inaccessible from without and irreplaceable. In contrast, computers, or what is called AI, do not make something a person, let alone a singular one. Computers and their products can be replicated, whereas pieces of art cannot, and the same holds for other artistic works, which reflects human singularity. There are no two persons whose minds are identical, whereas there are identical pieces of software. Given that human intelligence is a mental achievement, metaphysically speaking, no machine would be intelligent at all, let alone super-humanly intelligent.
Humans are biological organisms. Observable behaviour of an organism can be described. Any description of the observable behaviour of an organism that can be given by means of a finite number of exact instructions, expressible in finitely many terms, can be thought of as an effectively computable procedure. Provided that the result can be reached in a finite number of steps and provided that the procedure can in principle be carried out by such an organism, the only condition that remains is that the organism (the human being) can carry the effectively computable procedure out without insight or ingenuity.
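The notion of an effectively computable procedure described above can be made concrete with a classic example. Euclid's algorithm for the greatest common divisor is a finite set of exact instructions, reaches its result in finitely many steps, and can be carried out mechanically, without insight or ingenuity:

```python
# Euclid's algorithm as an "effectively computable procedure":
# a finite list of exact instructions, executed step by step,
# halting in finitely many steps, requiring no insight to carry out.

def gcd(a: int, b: int) -> int:
    while b != 0:            # repeat until the remainder is zero
        a, b = b, a % b      # replace (a, b) with (b, a mod b)
    return a

print(gcd(48, 18))  # 6
```

A human clerk following these instructions with pencil and paper behaves, for this task, exactly like the machine; that is the sense in which such a task can be handed over to a computer.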
So, any human whose task can be represented as the result of an algorithm can be replaced by a computer. Simplifying a bit: since we already have universal computers, "superintelligence" seems a bit rhetorical. All we really do is increase the number of human tasks that are replaced by means of a technology that is understood to an extreme degree (mathematically as well as physically).
On the other hand, the extensional equivalence between externalized behaviour of an organism and some set of well-formed formulas generated by an effectively computable procedure tells us exactly zero about human intelligence. What the actual processes are that give rise to the observable behaviour of an organism seems – in comparison to computers – an open question.
So, yes, we should be concerned about the consequences of technological development, but we should also keep in mind that we are dealing with technology when talking about AI, not with an organism.
"Everything has its own identity, which is unsurpassable in the whole universe." --Kodo Sawaki Roshi
And everything includes a computer, a mouse, . . .
Everything has an access to the Intellect; some things are better than others at accessing the Intellect but even a grain of sand is not totally unintelligent.
Transhumanism and artificial intelligence constitute a postmodern metaphysics that deconstructs the traditional metaphysics.
Robert Tucker
A.S.I. is based on a logical extrapolation from the remarkable development in the field of A.I. Once upon a time, self-driving cars, humanoid robots, and the like were not even considered a metaphysical possibility, but within a few decades they became reality. Therefore, the possibility of A.S.I. can't be ruled out. There are several possible challenges to human existence arising from the revolutionary development in A.I.; it would be the biggest mistake to ignore the possible threats from A.S.I.
The evolutionary process itself indicates that no single species remains the eternal ruler of the earth. It is the unique feature of human intelligence to predict possible threats and work against them, which has made us the most dominant player on earth for a long time, unlike other species.
Dear Sven Beecken,
"On the other hand, the extensional equivalence between externalized behaviour of an organism and some set of well-formed formulas generated by an effectively computable procedure tells us exactly zero about human intelligence. What the actual processes are that give rise to the observable behaviour of an organism seems – in comparison to computers – an open question."
I agree with you that the replication of the behaviour and activities of an organism in an A.I. system, through an algorithmic process or any other programming approach, has some limitations.
Amihud Gilead
"It is simply an unhappy philosophical jargon. Higher mentality and intelligence pertain to persons, each of whom is a singular being, epistemic inaccessible from without and cannot be replaceable. In contrast, computers or what is called AI do not make something a person, let alone a singular one. Computers and their products can be replicated, whereas pieces of art cannot be replicated and so are other artistic works, which reflects human singularity. There are no two persons that their mind is identical, whereas there are software that are identical. Given that human intelligent is a mental achievement,metaphysically speaking, no machine would be intelligent at all, let alone super-human intelligent."
There is a difference between human intelligence and artificial intelligence. The A.I. agents might not have some features of human intelligence (like, as you mentioned, art, mental states, and the way human intelligence is shared through intersubjectivity). However, this does not rule out the possibility of A.S.I.
Manas kumar Sahu ,
Your response to me seems unrelated to anything I said. I would, however, point out that extensions of what AI might mean in the future are not "logical predictions" as you say. They represent conceptual extensions based on empirical generalizations (AKA principles).
My point -- to restate in brief form -- is that we are tossing around an ungrounded construct sans any distinguishing criteria. In other words, we are engaged in idle chat and not rigorous philosophy. Hope that helps.
Manas kumar Sahu , have you seen this thread related to your research question?
https://www.quora.com/What-are-the-various-philosophical-metaphysical-and-epistemological-issues-surrounding-artificial-intelligence-AI
ASI is a modern metaphysics at the level of science and technology, which deconstructs the traditional Cartesian and Kantian metaphysics founded on unmediated human reflection.
Dear readers,
The expression "Artificial Intelligence" is an oxymoron in the first place, because it assumes that what is automatic/machine-like can be "intelligent", while it is exactly the opposite. If I follow a procedure exactly, I do not use my intelligence; I only use my intelligence when I do more than that. Coming up with a set of procedures is intelligent, since it is not limited to following procedures. Engineering needs intelligence: building machines, creating all kinds of industrial processes, and creating new AI tools all need intelligence, but none of the devices so constructed have any intelligence. They work exactly as we design them. They operate exactly as we intend them to operate. They do not depart from their design. And if they did depart from it, we would consider them broken, malfunctioning, and throw them in the garbage. Intelligence cannot be built into any machine. So the expression Artificial Intelligence is a contradiction in terms, an oxymoron.
Regards,
- Louis
Yes indeed, Louis, such is my view too. Of course, we may compute various things automatically, but such computation takes no part in our intelligence. Nothing a machine can do can be considered intelligence, memory, or any of our cognitive properties.
Those who assert that AI is an oxymoron beg the conceptual and empirical questions through circular reasoning. Defining the construct of 'intelligence' as a set of attributes exclusively applied to the actions of humans (or some larger range of carbon-based biological life forms) contributes nothing to the discussion because it fails to address what there is to mean by certain non-carbon-based behaviors that are obviously intelligent in an objective sense. A good place to start in understanding this issue is Alan Turing's original test, progressing then to John Searle's Chinese Room puzzle, and from there to current views. It can quickly be seen that this is a deeply challenging topic, one not addressed by artificial restrictions on the range of application. Among the reasons philosophers, psychologists, and neuroscientists are working on this issue is the fact that one day in the not-too-distant future we must come to terms with the rights and moral responsibilities accorded to non-biological intelligent life forms, most likely arising in quantum computing networks. None of this, by the way, requires adding the term "super" to the discussion.
Dear Robert,
I do not restrict "intelligence" to humans. I think it is true that humans are intelligent in unique ways that no other animals on this planet share. But I would accept qualifying the way a tiger hunts as intelligent. I find my dog very intelligent in his own way; he sometimes tricks me, and I often try to trick him and fail, because he sees me coming. And when we look at the way all natural living entities operate, not only do they operate with some intelligence, but there is a great intelligence in Nature in creating everything. But it took Nature billions and billions of years to perfect that intelligence. As for us, we do not try to create intelligence. We have no clue what it even means. We only try to build unintelligent devices, like our ancestors creating stone tools. Now we are only a bit more advanced in creating tools that are autonomous, and AI is the domain of engineering that tries to increase this level of autonomy.
Nature has been intelligent from the beginning, since Nature has always created itself from its own intelligence and has provided a tiny bit of it to us. But being able to use our intelligence to create automated devices is not the same as trying to create intelligence. And experts in AI are not well placed to know how life works, since they are not biologists. And biologists can't even figure out how the simplest cells work, except for particular aspects. Figuring out how a worm operates is at an astronomical distance, and figuring out how a primate operates is mega-astronomically far away. We have not even scratched the surface of all that. But Nature never tries to design tools or machines as we do. Each living creature is a self-creating machine; this is true of any living entity. And we have no clue how this is possible in any of these cases. Nature is not in the business of machine creation but in the business of self-creation of self-creating machines. This has nothing in common with what AI experts try to do: they try to create advanced autonomous machines, or techniques for creating such machines.
I completely agree with Turing's way of determining whether an agent is intelligent, and I am completely, 100% certain that no machine that is not intrinsically self-creating will ever pass the test.
Regards,
- Louis
Louis,
our intelligent behavior, as well as that of tigers and dogs, depends on the possibility of encoding strategies into genes (and, in our case, on inheriting learned intelligent strategies socially and culturally). The very possibility of there being such strategies is that they prescribe certain behaviors (or cognitive outputs) depending on external conditions. "Natural" intelligence is certainly much more sophisticated than current (or soon-to-come) artificial intelligence, but that does not mean it is in principle of an entirely different kind; indeed, in providing such strategies, it fundamentally works in the same way: by matching a range of outputs to common inputs.
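The claim that a behavioural strategy is, at bottom, a mapping from external conditions to prescribed responses can be sketched in toy form. The conditions and responses below are invented for illustration; the point is only the abstract structure, which an evolved organism and a programmed machine can both implement.

```python
# Toy sketch of a behavioural "strategy" as a mapping from external
# conditions (inputs) to prescribed behaviours (outputs).
# All entries are illustrative assumptions, not biological data.

strategy = {
    "predator_nearby": "flee",
    "food_visible": "approach",
    "mate_signal": "display",
}

def respond(condition: str) -> str:
    # Fall back to a default behaviour for unrecognized conditions,
    # the way an organism defaults to exploratory behaviour.
    return strategy.get(condition, "explore")

print(respond("predator_nearby"))  # flee
print(respond("strange_noise"))    # explore
```

Whether such a mapping is written in genes, learned culturally, or coded in software does not change its abstract form, which is exactly the similarity being argued for here.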
Best,
Joachim
Joachim,
Of course, human intelligence is of quite a different kind from the computing capability of any machine. The principal difference is that each human subject is singular, and no human subject can have a duplicate. In contrast, each machine, each piece of hardware and software, can be duplicated. This is only one major difference, but it is more than enough to show that your argument is not sound. No machine can be considered intelligent; at most it can be considered sophisticated. All this holds true for current machines as well as future ones. The difference is metaphysical and does not depend on any scientific development or progress. We are not machines, and no machine will be human.
Dear Joachim,
Even a eukaryotic cell inherits 3 billion years of life's evolution; we inherit 3.5 billion years, plus hundreds of thousands of years of human cultural evolution. But life forms are not machines, not mere semi-autonomous tools constructed from the outside for outside purposes. Life forms are self-creating interactions, and complex animals are colonies of thousands of billions of eukaryotic cells acting as one self-creating interaction. The only machines, semi-autonomous tools, that exist on this planet are those that humans have built. Nothing else in Nature is a machine; a machine can only exist through our making. All living organisms have, by their physiology, ranges of stabilized behaviour possibilities, and we build our machines in an analogous way. When you walk on an easy, flat surface, you enter a regime of repeating leg movements. We do build machines with articulated legs that can walk in a fully automated way on a limited variety of terrains. And much of our own behaviour runs in autonomous mode, done unconsciously. So the analogy with the machines we build holds only at this level. The difference is how living forms come to that and how they are not limited to it. They are intrinsically purposeful, all about self-creating interaction: gigantic societies of interaction creating themselves within an enormous network of other living beings. In short, they are born of the Universe, are it, and consciously so, each in its own way. I thus come back to Kant: living beings are natural purposes, while our machines have no purpose of their own; we build them as part of our purposes.
Regards,
- Louis
Dear Michael,
"Consider a case where human built machines were programmed to evolve intelligence, and thus gain the ability to break their original rules within certain limits."
If I were to grant that this case could exist, then yes, AI could exist. The problem is that machines that would be more than what we design them to be are not possible. There is no shortcut for creating self-creating living beings. It takes a very special solar formation system and a special planet; then a first cell appears, a conscious world appears, and it goes on from there.
Regards,
- Louis
Amihud,
of course it is true that each human subject is singular; it simply has nothing to do with what I wrote. Why should the fact that X is intelligent depend on there being no duplicate of X? In fact, we would assert the opposite: if X is intelligent, then so is each duplicate of X.
Dear Louis,
what you write only reinforces your basic view that intelligence can only be "natural". That's fair enough, but that very view was what Robert questioned in the first place ("Defining the construct of 'intelligence' as a set of attributes exclusively applied to the actions of humans (or some larger range of carbon-based biological life forms) contributes nothing to the discussion because it fails to address what there is to mean by certain non-carbon-based behaviors that are obviously intelligent in an objective sense"), so your reply adds nothing to this stalemate between fundamentally opposing views.
What I wrote, in turn, pointed out similarities between "natural" and "artificial" intelligence, which seem much more integral to an objective notion of intelligence than whether the intelligent object is the result of evolution or technology (or, to refer to Amihud's comment, singular or duplicable).
Artificial Intelligence is in part just a programmed tool, yet (and this is the new thing) a part of it can generate rules, which opens the interesting possibility of its developing autonomously into something wiser than its human maker.
Note that computers have an exceeding ability to access and process very large data sources, which magnifies the power of any newfound rule. It is not only our brains but all our senses that are affected by AI. We already use sensors that are beyond human physical reach (in cell phones, for example). One can see the progress of communication, computing power, sensors, nanotechnology, and AI with one's own eyes, just by recognizing some of the changes we have experienced in the last 20 years. This should be expected to grow further and produce more and more intelligent systems, maybe not our kind of multi-functional intelligence, but superior to ours in a way, the kind that dictates our reality (as the internet has done). And it is just the beginning; we have not even started to discuss Virtual Reality...
Is it possible for an intelligent entity to create a non-intelligent artifact?
Interesting thought, and still with an unexpected result.
Superintelligence is created by humans through mathematical algorithms. Naturally, when the database grows beyond what our consciousness can hold, it looks amazing, and at times even frightening. But it all depends on us, the creators, and on how much we trust the presented results of artificial intelligence. Of course, not all tasks can be solved with artificial intelligence, because people differ from one another, and it can always happen that a problem has no solution.
Only if a new algorithm is created without human intervention could a super-intellect then truly surprise us.
Dear Joachim.
''contributes nothing to the discussion because it fails to address what there is to mean by certain non-carbon-based behaviors that are obviously intelligent in an objective sense''
I did not fail to address the difference between what is alive and a machine. I ascribed intelligence to all that is alive, as being part of any self-creating interaction. I excluded our machines from this definition; this is trivially demonstrated. What I did not elaborate on is why the most primitive cells and all other life forms are not themselves self-creating machines, and why all life is not just biological machinery.
Before I proceed, notice that even if I were to grant the point that all life is the evolution of carbon-based machineries, my point would remain that AI is impossible, since intelligence needs to grow with the very process of evolution on the planet. It comes only through it.
I take as a premise that nothing that exists is perfectly stabilized, nor similar to anything else. I do accept that any scientific concept is an idealization of a certain quasi-regular aspect of the Universe. But I do not assume that any such perfect order exists in Nature. We know it is the case that no human being is like another. I extend this to all that exists. This is my first metaphysical assumption.
My second assumption is that quasi-stable entities can only remain so if they also have a self-creative capacity to return to stability when interacting. Without this self-creating capacity, stability would not be possible.
I take biological self-creating interaction, biological life, to have this self-creating capacity, which by its very definition is not machine-like: it is not amenable to a definition that would assume a perfect regularity, which assumption 1 rejects.
Consciousness thus by definition corresponds to what is actually the opposite of the machine-like: it is what creates the machine-like. This is very similar to Schrödinger's notion of consciousness, except that I extend it to all that is alive (biological or not):
''The ensuing organic development begins to be accompanied by consciousness only inasmuch as there are organs that gradually take up interaction with the environment, adapt their functions to the changes in the situation, are influenced, undergo practice, are in special ways modified by the surroundings. We higher vertebrates possess such an organ mainly in our nervous system. Therefore consciousness is associated with those of its functions that adapt themselves by what we call experience to a changing environment. The nervous system is the place where our species is still engaged in phylogenetic transformation; metaphorically speaking it is the “vegetation top” (Vegetationsspitze) of our stem. I would summarize my general hypothesis thus: consciousness is associated with the learning of the living substance; its knowing how (Können) is unconscious.''
''Mind and Matter'' (1958).
Learning means that what was not there before is created anew. Creation = consciousness.
Regards,
- Louis
A problem that we seem to have in this discussion is thinking about intelligence in a way that is not conflated with the biological intelligence with which we are familiar. This kind of challenge is to be expected because we are, after all, using our intelligence to contemplate what other forms of intelligence would look like.
Nonetheless, the discussion cannot progress beyond offering personal opinions unless we establish criteria for the notion of intelligence that are not logically or empirically dependent on biological forms of intelligence. For those interested in that discussion, a good place to start is with one of the most universal definitions of intelligence: adaptability to the environment. Using this simple definition, we see that rocks are not intelligent, plants are more so, animals even more, and humans the most -- given what we know, which in the larger scheme of things is not very much. As I write this, we have self-evolving AI systems that rewrite their own source code as they get smarter. They do this as an adaptive function.
So, I'll start the list with this:
Tentative First Criteria for an Intelligent Agent
1. Adapts to its environment (e.g., evolves to accommodate previously unaccommodating patterns of change in inputs).
2. Learns (i.e., can solve problems of increasing complexity, scope, and type in response to experience).
3. Anticipates based on historical pattern recognition.
Perhaps others can add to or amend this tentative start.
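Purely as a toy illustration (my own sketch, not anything proposed in this thread; all names such as `ToyAgent` are invented), the three tentative criteria above can be read as a minimal agent interface: an internal model that adapts to its environment, improves its predictions with experience, and anticipates the next input from history:

```python
# A toy agent illustrating the three tentative criteria.
# All names here (ToyAgent, observe, anticipate) are invented for illustration.

class ToyAgent:
    def __init__(self, learning_rate=0.5):
        self.learning_rate = learning_rate
        self.estimate = 0.0   # internal model of the environment
        self.history = []     # remembered past inputs

    def anticipate(self):
        """Criterion 3: predict the next input from the historical pattern."""
        return self.estimate

    def observe(self, value):
        """Criteria 1 and 2: adapt the internal model to the environment;
        prediction error shrinks with experience (a crude form of learning)."""
        error = value - self.estimate
        self.estimate += self.learning_rate * error
        self.history.append(value)
        return abs(error)

agent = ToyAgent()
# A constant environment: the agent's prediction error shrinks as it adapts.
errors = [agent.observe(10.0) for _ in range(20)]
assert errors[-1] < errors[0]
```

Of course, such a sketch only satisfies the letter of the criteria in the most trivial way; whether satisfying them in richer environments would amount to intelligence is exactly what is at issue in this thread.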
Dear Tucker,
''universal definitions of intelligence as adaptability to the environment.''
The human type of intelligence is more about changing our environment than adapting to it. We are more niche builders than niche adapters. We live more and more in a purely artificial environment, and most of our adaptation effort goes into conforming to each other's norms; and people in power are much more about forcing others to adapt to their norms than about adapting to others'.
Take artistic creativity. It is not obviously about adaptivity. It is more about creating new occasions for developing experience, whose enjoyment corresponds to the growth such experience provides us. So here is a better definition of intelligence, not one limited to engineering or scientific creation but covering all kinds of creation: intelligence corresponds to the capacity for creating collectively (because we are essentially social creatures) enjoyable niches of experience.
Regards,
- Louis
Dear Louis,
I certainly do not claim that there are no differences between organisms and machines (that is rather trivial, I think). Accordingly, I do not claim that organisms are machines. What I suggest is that the differences between organisms and machines need not make a difference for attributing intelligence (at least I have seen no compelling reasons that they do).
I fully agree with Robert that "the discussion cannot progress beyond offering personal opinions unless we establish criteria for the notion of intelligence that are not logically or empirically dependent on biological forms of intelligence".
Regards,
Joachim
Dear Joachim,
Isn't the essence of such a discussion a process of sharing opinions and showing each other the weaknesses or problems of those opinions, in such a way that some participants will learn something new or unlearn what they thought was true? In essence, such a discussion is itself a collective process of moving beyond personal opinion. So I am all with you on moving beyond personal opinion, but what do you suggest we do more (or less) than what we already do?
I would say that it is impossible to offer more than one's personal opinion in a discussion, unless someone offers somebody else's opinion which he does not share; this is sometimes useful. Criticizing somebody else's personal opinion is itself a personal opinion on that opinion, and answering the critique is likewise a personal opinion on somebody else's personal opinion. But this whole dialectical process of sharing personal opinions is the only way to transform personal opinions into collective ones, and so into something that goes beyond personal opinion.
There are a bunch of popular opinions about what intelligence is in dictionaries and in the literature of different branches of human knowledge. All of us entering this discussion have read and partially agree with some of them. So each of us already has a personal opinion that partially reflects a certain existing collective consensus, but each of these is very partial. A discussion is a way to bring them together.
What is often used as a working definition of ''intelligence'' in AI, where it currently serves to qualify the latest techno gadget as ''intelligent'' (intelligent devices, smart phones), should be rejected for the purpose of this discussion, since it inverts, in an Orwellian way, the original consensual meaning of the word intelligence in order to sell the latest techno crap.
We can try to capture in this discussion a general enough notion of intelligence, one that remains as open as possible to the possibility that artificial machines of our making could be intelligent. But this general notion has to be found in the only domain where we know the word intelligence applies: the human and living domain. Only then could we explore whether it is possible for a ''machine'' to be intelligent.
On the other hand, I agree with Turing that human intelligence has the capacity to assess the existence of intelligence in other humans, and that this capacity could be harnessed to assess the intelligence of any artificial agent. Turing limited this to conversation, but humans assess the intelligence of other animals all the time without conversation. We are natural mind readers from behavior, so we can naturally extend this capacity to artificial agents. Anyone who has experienced those automated phone services knows very quickly that there is no intelligence on the system's side of such hellish interactions. Whatever system we interact with, we very quickly discern whether there is some hint of intelligence on the other side. For my part, I have never got a hint of the presence of intelligence in any techno gadget.
Regards,
- Louis
Dear Louis,
in my opinion (!), opinions are useless in debate (especially in public, anonymous debates like this one). All that counts are arguments and reasons! If they are good enough, they might change my „opinion“.
Best,
Joachim
Dear Joachim,
Your last argument uses a strange dichotomy: opinion versus argument.
There is no argument you can offer me against which I cannot offer another argument to invalidate it. You see, we are not in the mathematical proving business here. Whatever word you use has different meanings. No such dichotomy exists. Arguments are opinions. You simply have to enlarge your notion of what an opinion is. Anyway, this is your opinion and this is mine.
Regards,
- Louis
Dear Louis,
indeed, you should not just offer any argument, but a good one (if it were just about any argument, you could probably deny most modern scientific beliefs by offering a literal interpretation of the Bible). If you are rational, you will strive to form your opinion in such a way that it is consistent with the best reasons available to you. It goes without saying that many opinions are held on the grounds of bad reasons that would never hold up in an attempt to produce a sound argument.
I am here to exchange reasons, not opinions. If you are here merely to exchange opinions, that’s fine, of course. However, then I wonder why you seem to be arguing on RG with views different from your own, for arguing presupposes arguments, not merely opinions. You don’t need mathematical rigor for that, a notion of „either (or both) of us is wrong“ suffices.
Best,
Joachim
Dear Joachim,
J:''you should not just offer any argument, but a good one''. I also share this opinion. The adjective ''good'' here is significant. One opinion may be better than another, and this judgement will have to be made by each of us. The art of discussion is making an argument so simple that it is very hard to make another against it. At that point one has almost succeeded in removing personal opinion. The use of analogies can sometimes provide such simple arguments.
J:'' It goes without saying that many opinions are held on the grounds of bad reasons that would never hold up in an attempt to produce a sound argument.''
This is the point of any rational discussion: to point out these problems in each other's arguments. Sometimes someone will come up with a self-contradicting argument; in such a case it is easy. I pointed out that ''artificial intelligence'' is such a contradiction in terms. I am still waiting for an argument against this argument.
J:''I am here to exchange reasons, not opinions. ''
I do not accept your above opinion, or bad argument. You make a dubious dichotomy between reasons and opinions. I think that we simply use these words differently, and would not really differ in opinion if this were clarified.
J:''. If you are here merely to exchange opinions, that’s fine, of course. ''
This seems to indicate that I am right in my above assessment. Yes, I am here simply to exchange opinions, but for me saying this does not mean what it means for you.
J:'' However, then I wonder why you seem to be arguing on RG with views different from your own,''
Why would I be arguing with views that do not differ from my own? This argument was an easy one.
Regards,
- Louis
Dear Louis,
that was a whole slew of bad arguments at once!
L: „An opinion may be better than another and this judgement will have to made by each of us“ - Indeed we have to make this judgment, but the standards by which this judgment is correct cannot be set by each of us. What makes judgments like „Louis Brassard does not exist“ correct are the facts referred to (such as that Louis Brassard exists).
L: „I pointed out that ''artificial intelligence'' is such contradiction of terms. I am still wainting for an argument against this argument“ - if you simply define intelligence as being natural, then no one can change your mind that it can also be artificial. If I defined „Louis Brassard“ as „the person who does not exist“, who could prove me wrong? Again, I defer to Robert on this point.
J:''. If you are here merely to exchange opinions, that’s fine, of course.“ L: „This seem to indicate that I am right in my above assesment. Yes I am here simply to exchange opinions but for me saying this do not mean what it means for you.“ - it‘s fine as long as you don‘t argue. I had already spelled this out above: Once you start an argument, you cannot rely on stating opinions anymore, you‘ll have to provide reasons (or, well, physical threats or bribes might work too, depending on circumstances).
J:'' However, then I wonder why you seem to be arguing on RG with views different from your own,'' L: “What would I be arguying with views that do not differ from my own? This arguent was an easy one.“ - No, what you offered was merely a pun and a (deliberate?) misconstrual of what I wrote. (I guess that’s what passes as an „argument“ on Twitter, or in politics.) I said that if you argue, you need to provide reasons which are good independent of opinion.
Best,
Joachim
Dear Joachim,
''that was a whole slew of bad arguments at once!''
Let's now see how you support this opinion.
J:'' but the standards by which this judgment is correct cannot be set by each of us. What makes judgments like „Louis Brassard does not exist“ correct are the facts referred to (such as that Louis Brassard exists).''
You seem to confuse a judgement with an argument that supports a judgement. A judgement process comes before one supports it, and may or may not end with a process by which one is able to support it with an argument. Yes, one is not totally free in how to make an argument, but there is no universal norm. In mathematics there is. In the natural sciences the norms are also very well established, but when we come to psychology or philosophy the norms vary with different schools of thought, with no norms but the most trivial shared among all philosophers. You are, I think, an analytic philosopher, of the hair-splitting school, which is probably the strictest about norms; but you pay for this with a tunnel vision: a practice of philosophy trying to stay as close as possible to scientific practices. The price to pay is avoiding whatever topic is not reducible to such practices, and from there comes the tunnel vision. Just a personal opinion.
J:'' if you simply define intelligence as being natural, then no one can change your mind that it can also be artificial. ''
I do not make this premise in the argument. The argument is based on the notion of a machine as something doing what we specify it to do. The second assumption is that a fixed behavior is not an intelligent behavior. From there, a machine cannot be intelligent or have intelligent behavior, so ''machine intelligence'' is an oxymoron. To counter this argument you have to counter one or both premises.
J:''Once you start an argument, you cannot rely on stating opinions anymore, you‘ll have to provide reasons (or, well, physical threats or bribes might work too, depending on circumstances).''
Again another dubious dichotomy. That the reasons used in an argument cannot be a personal opinion is your personal opinion and not mine. Ideally one can make arguments so simple that we are forced to agree with them; that is exactly this ''ideal''. Unfortunately it is not usually possible, although the process of a discussion should try to bring us closer to this ideal situation.
Sorry Joachim, none of the above arguments were, in my opinion, remotely convincing in support of your opinion.
Regards,
- Louis
Dear Louis,
again, you misconstrued what I wrote. I did not write that „The reasons used in an argument cannot be a personal opinion“, but that in a discussion it does not matter what your opinion is; what matters are the reasons you provide. Obviously, if you provide the best possible reason, then your opinion should cohere with it.
The point is that what can be evaluated objectively are reasons; opinions are only rational if they are consistent with or supported by the best reasons you have.
Regarding the „tunnel vision“ - that is your personal opinion, yes, so I will not comment on it.
However, this has gone off topic long enough. What I initially wrote - before all this - is that our behavior, insofar as it has been acquired intelligently over the course of evolution and social learning, is to a degree rigid as well. We can choose between viable strategies, but certainly not freely. The same would be true of an ideal machine.
Further, such strategies are not content-independent - what matters cannot only be that they are not fixed, but even more so that they are adequate with regards to the context. Running away from a hungry lion will (evolutionarily) always be more intelligent than running towards it, no matter whether either behavior was determined „freely“ or „fixed“. So, implementing an open-ended list of intelligent behaviors would constitute one version of an intelligent (if rigid) machine.
I suspect what you are looking for is creativity, not merely intelligence; and it is highly dubious to what degree either of us is capable of „free“ creativity, in the same way in which either of us is capable of „free will“... We are subjects to the limits of our senses, our cognitive capacities, our imagination, our surroundings... just as the potential intelligence of machines will be subject to constraints of their programming and learning algorithms.
Best,
Joachim
Humans and machines differ in their material. Hence machines can mimic humans on some dimensions, or even on many or all of them, but the natural and the artificial can never be the same. At least it is not a jargon, nor even a mere possibility, but lies somewhere in between; and even if we say "impossible", the word on its own says "I am possible".
I do not intend disrespect to anyone but this discussion seems to be going off the rails. I would make a few points that address what appear to be significant misunderstandings.
- A parrot who utters E=MC^2 is not doing science. We base the merit of a philosophical or a scientific proposition not on whether it happens to be correct (empirically, conceptually, logically, definitionally, conventionally, etc.) but on the rational processes by which it was derived. This single point vitiates a number of claims I see in this discussion.
- Question begging and the construction of tautologies do not solve philosophical problems.
- A discussion on this general topic must allow for the possibility of intelligence as yet undiscovered and of a structure and function unknown to us; otherwise, it is dogma more resembling religion than philosophy or science.
- Invoking a corrupted form of Wittgenstein's private language argument puts an end to the discussion.
- Parsing the original question doesn't get us very far (I attempted to respond to what might have been the implied question). The "super" in AI is "super"fluous, and the construction "metaphysical possibility" is, as I said initially, nearly meaningless. Finally, I have no idea what philosophical jargon means. 'Jargon' is not a pejorative term, but it seems to be employed as such in the original question.
Is it time to move on to another question and thread?
Non-intelligence is a (meta)physical impossibility.
What is artificial is necessarily also natural.
Is a computer intelligent? Of course, and it does not need to be switched on.
If we focus on domain-specific tasks (like chess playing or mathematical computing), we already have ample evidence of artificial intelligence that surpasses our own -- often vastly so. The interesting question is whether we will ever witness an artificial intelligence that surpasses us in all respects. Certainly, the presence of domain-specific cases gives us good inductive grounds to surmise that even more impressive artificial performances are on their way. From a metaphysical standpoint, I see no armchair reason to rule out this possibility.
There might, however, be epistemological and/or embodied reasons to doubt that superintelligence will one day be a thing...
The works of Nick Bostrom annoy me a bit, so you might consider for a change the arguments by Hubert Dreyfus (https://mitpress.mit.edu/books/what-computers-still-cant-do) and Selmer Bringsjord (https://www.springer.com/gp/book/9780792316626)
You mention Kant, so for my own thoughts on the ethical implications of robotic autonomy, see
Article The Mandatory Ontology of Robot Responsibility
Article Bridging the Responsibility Gap in Automated Warfare
In any event, this is a pressing issue that merits pressing attention.
As to the question " Can there be a singularity in the field of artificial intelligence in the future? "
Please see the arguments on the link posted by Beemnet Mengesha Kassahun and my arguments in:
https://www.researchgate.net/post/What_is_the_main_contradiction_of_the_theory_of_technological_singularity
Artificial intelligence is similar to a mathematical faculty with the universal power of logic, but without the differences deeply rooted in history and culture. Metaphysics comes to us through all that historical and cultural legacy which has created who we are. There is no real meaning of metaphysics without hermeneutics, without knowledge of the whole European tradition: Greek, Jewish, and Christian.
Artificial intelligence (AI) and robotics are digital technologies that will have a significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these.
CLOSINGS:
It is remarkable how imagination or "vision" has played a central role since the very beginning of the discipline (AI) at the "Dartmouth Summer Research Project" (McCarthy et al. 1955 [OIR]; Simon and Newell 1958). And the evaluation of this vision is subject to dramatic change: In a few decades, we went from the slogans "AI is impossible" (Dreyfus 1972) and "AI is just automation" (Lighthill 1973) to "AI will solve all problems" (Kurzweil 1999) and "AI may kill us all" (Bostrom 2014). This created media attention and public relations efforts, but it also raises the problem of how much of this "philosophy and ethics of AI" is really about AI rather than about an imagined technology. As we said at the outset, AI and robotics have raised fundamental questions about what we should do, and what risks they have in the long term. They also challenge the human view of humanity as the intelligent and dominant species on Earth. We have seen issues that have been raised and will have to watch technological and social development closely to catch the new issues early on, develop a philosophical analysis, and learn from traditional problems of philosophy.
(Taken from the Stanford Encyclopedia of Philosophy, plato.stanford.edu, "Ethics of Artificial Intelligence and Robotics")
If the concept of building artificial superintelligence takes into account the issue of artificial consciousness, then at present, as long as there is no technical possibility of building this type of technology in the full sense of the word, the issue can be described both as a metaphysical potential (the possibility of building such technology in the future) and also as philosophical jargon, depending on the selection of key attributes and the interpretation of the issue in interdisciplinary connection with other fields of knowledge.
Best regards,
Dariusz Prokopowicz
A human being has both diachronic and synchronic dimensions at the same time. It is not only rationality; it is a general understanding of existence. Artificial intelligence does not understand this.