I'm particularly looking for papers which debate the possibility vs. impossibility of "fault tolerant reasoning" in strong AI, i.e. the ways a sufficiently advanced AI would deal with logical or, more generally, epistemological contradictions.
To the best of my knowledge, the term ‘strong artificial intelligence’ has been, and is, used with multiple meanings.
1. Originally, this term was proposed (by John Searle) to describe efforts to develop a sufficiently comprehensive and effective computational theory of mind (as well as the corresponding computational methodology).
2. The term was also adopted by many researchers to refer to research efforts towards artificial general intelligence, which aims at computational systems that reproduce the overall intelligence of human individuals (though not extending to collective human intelligence).
3. Many researchers have also used the term to describe a combination of computer/system self-capabilities to build awareness, manage contexts, reason purposefully, and develop operational strategies in real time and in changing contexts, as humans do.
Philosophically these are different. Which aspect(s) are you specifically interested in?
Thanks very much for the conceptual elucidation of the term AI. I find myself more interested in your third formulation of AI, especially the "self-capabilities to build awareness, manage contexts, reason purposefully". I'm wondering if a sufficiently advanced AI could "reason" self-sufficiently, i.e. without being assisted by human intervention.
You artfully put into words what I've been obsessed with: "fault tolerant logical reasoning". Prima facie, I tend to find such a capability in an AI machine suspicious, and I'm wondering how, logically speaking, it could be achieved. When faced with a logical contradiction, a human being can always resort to his/her store of illogical feelings and instinctual drives. For instance, if the act of living on/surviving were to be proven logically incoherent (by some existential philosopher), people would not simply begin to slit their wrists afterwards; a human being could always come up with an illogical answer--and this is what most people do in social life--: "OK! I accept that my life is absurd and logic dictates that I kill myself, but to hell with logic! I love my life and can live on without logic, just as my ancestors have lived for thousands of years." More tangibly, the same argument goes for people who maintain their belief in God even after being convinced that their belief is illogical--"Credo quia absurdum est", "I believe because it is absurd", as Tertullian proclaimed.
In other words, human reasoning is "fault tolerant" because it is predicated on a pre-logical, instinctual, volitional component which has evolved over millennia to sustain and further the organism's survival. Logic is a latecomer in this evolutionary process. So, I'm wondering how exactly an advanced AI would react to such logical contradictions--Gödel's incompleteness theorems, Agrippa's trilemma, etc. Would it simply sidestep the question? Would it choose to go into a meditative trance like a Buddha? Would it try to wriggle out of such mortal questions through sophistry? Would it consider such questions irrelevant?! Would it "explode"?
It is intriguing how the logical self-sufficiency of AI machines is thoughtlessly overrated in science fiction; why would HAL 9000 want to take over the crew? Based on what line of logical reasoning would it autonomously desire to do so? For wanting/desiring is inextricably bound up with humans having a biological component--an involuntary drive to survive. It is difficult to understand in what sense of the word an AI machine would "want" or "desire" anything at all--unless it is already programmed/rigged to desire!
I had already formulated my concern about strong AI in another question in terms of yet another problematic topic in AI, Intelligence, as follows. I think it summarizes what I've been arguing for so far.
1) Intelligence is one's capability to perceive or calculate the relations between variables within a given (shortest) time.
2) Intelligence has evolved only because it would further our survival.
3) We care about furthering our survival only because we are mortal organisms, only because we dread our death/ finality in terms of time.
4) We dread our own death because we have a biological component which takes pleasure out of certain sense-impressions and their prolongation as well as a psychological component which harbors certain desires and the prospect of their fulfillment in the future.
5) An AI machine, by definition, does not have a psycho-biological component; if it did, it would be a clone and not an AI machine.
6) A perfectly viable AI machine would not be concerned(!) with its finite existence as it does not have any desires, nor does it take pleasure out of certain sense-impressions.
7) An AI machine would not be in need(!) of Intelligence to begin with unless it is programmed to be concerned about survival; if it were programmed as such, it would be more of an indentured robot and less of a self-sufficient Artificial Intelligence.
ERGO,
8) AI, in its most ambitious conception, is not possible.
As someone who is more interested in the philosophical implications of AI, I would very much appreciate your opinions or objections to such line of argument against AI.
I would also like to hear your thoughts on my argument (above) against strong AI, and I would appreciate any papers you might suggest that are somehow related to this line of argument.
I very much appreciate your commitment to a deep discussion that I also like very much. And thank you for your positive attitude towards my reasoning.
The third formulation of ‘SAI’ is actually what we are pursuing, but in a different way. We work in the field of second-generation (in other words, smart) cyber-physical systems, with the aim of developing system-level smartness. This is known to be a challenging compositional characteristic of CPSs that cannot be associated with the operation of any particular component of these systems, nor can it be implemented without a synergetic cooperation of the components and, furthermore, without considering the purposeful interactions of these systems as a whole with their environments. To move towards this destination, we developed the concept of procedural abduction (PA), and we are working on its implementation in different pilot systems, which are expected to behave as smart CPSs. The attached file gives an insight into the conceptual fundamentals and the computational approach of PA. Obviously, comments/questions concerning these are welcome. I must admit I cannot answer your brilliant question, namely: “if a sufficiently advanced AI could ‘reason’ self-sufficiently, i.e. without being assisted by human intervention”. Not only do the technological issues and knowledge gaps hold me back from taking a position, but so does the notion and criteria of 'self-sufficiency'. We hope to know more about it, but there is no low-hanging fruit ...
You addressed the issue of "fault tolerant logical reasoning". Most probably I have a very unique position concerning this. Learning from ‘logical’ faults, or from whatever other kinds of faults (e.g. those originating in semantic misinterpretation), is important and useful for practicing human intelligence, and it is probably also indispensable for system intelligence. You argued properly that humans resolve the above undesirable situations based on ‘intangibles’. But how can systems that lack these assets correct reasoning inconsistencies? I do not know the answer, I do not even have a clue, but I am very much interested in knowing more about it (namely about, reusing your term, “how exactly would an advanced AI react to such logical contradictions”). Transcendent elements are not necessarily useful in the argumentation.
You mentioned: “I had already formulated my concern about strong AI in another question in terms of yet another problematic topic in AI, Intelligence,…” Let me react briefly to your propositions:
“1) Intelligence is one's capability to perceive or calculate the relations between variables within a given (shortest) time.” This reads as an oversimplification to me. (Totaled human) intelligence is a complexity that we cannot even model in its entirety.
“2) Intelligence has evolved only because it would further our survival.” This argument has an interesting but debatable kernel. Has intelligence evolved under the pressure of survival, or has intelligence gone through a chance-enabled evolution and simultaneously enabled survival?
“3) We care about furthering our survival only because we are mortal organisms, only because we dread our death/ finality in terms of time.” This is a topic for philosophers.
“4) We dread our own death because we have a biological component which takes pleasure out of certain sense-impressions and their prolongation as well as a psychological component which harbors certain desires and the prospect of their fulfillment in the future.” Do we really dread it? But, anyway, and more importantly, what do the above two issues/statements have to do with SAI?
“5) An AI machine, by definition, does not have a psycho-biological component; if it did (had?), it would be a clone and not an AI machine.” Yes, I agree, I also see a different teleology.
“6) A perfectly viable AI machine would not be concerned(!) with its finite existence as it does not have any desires, nor does it take pleasure out of certain sense-impressions.” This is too many claims in one for me. What is the definition of “a perfectly viable AI machine”? What does “concerned” mean in this context? (This again takes me to the debated field of teleology.) But I do not exclude it, considering future system capabilities and technological affordances.
“7) An AI machine would not be in need(!) of Intelligence to begin with unless it is programmed to be concerned about survival; if it were programmed as such, it would be more of an indentured robot and less of a self-sufficient Artificial Intelligence.” Again, why do you capitalize Intelligence? Do you use it in a specific meaning? “… programmed to be concerned about survival …” Sorry, this is too abstract for me …
“8) AI, in its most ambitious conception, is not possible.” If you think of a perfect (100 percent) replica of (the totaled) human individual, team, community, society and mankind intelligence (and these together and in natural association), then I tend to agree with you.
The last comment: why should we argue against AI? We need to learn how to live with it properly and beneficially.
I am still pondering your use of a capital I in Intelligence. As far as human-associated intelligence is concerned, we can in principle differentiate four manifestations:
1. individual intelligence (the intellectual capability of a single human being).
2. team/group intelligence (the active capacity of purpose-sharing individual members of a team/group to learn, teach, communicate, reason and think together, irrespective of position in hierarchy, in the service of realizing shared goals and a shared mission). (Suzanne Gordon)
3. community/population intelligence (the capacity of a community, an organization, or a population to understand, address and ameliorate conditions that impact community objectives and well-being by (i) responding in creative and appropriate ways, and (ii) evaluating its policy and programmatic initiatives). (Stephen Randal Henry)
4. mankind's intelligence (actually the collective intellectual potential of humanity on the largest possible scale and in the widest possible meaning).
Do you think of the last manifestation of intelligence when you use Intelligence?
One question is how closely artificial intelligence can come to the way in which human minds work, especially when making moral judgements. On this, see especially: Joshua Knobe (2010). Person as Scientist, Person as Moralist. Behavioral and Brain Sciences. 33: 315-329. There's a very useful bibliography.
I think this psychological/philosophical angle might well be useful in developing your notion of 'fault-tolerant' reasoning. What is 'reasoning'? How might it be 'faulty'? How would we know whether or not it is 'faulty'?
And a second issue.
There are some interesting thoughts on AI in law. AI has been used so far to replicate what legal experts would do. However, the possibility emerges that AI might be able to outstrip the experts. On this, see Susskind and Susskind, The Future of the Professions: How Technology Will Transform the Work of Human Experts, Oxford University Press 2015 and a podcast that introduces the issues: https://www.gresham.ac.uk/lectures-and-events/what-will-happen-when-artificial-intelligence-and-the-internet-meet-the-professions
Thanks for a very elaborate and precise analysis of the premises of my argument.
The core of my argument against AI is the idea that Intelligence presumes speed and speed presumes time; in other words, by definition, there is a temporal component to all manifestations of Intelligence--and I capitalize the word as I believe that all "intelligences" share a common essence in terms of their time-boundedness.
There are, of course, attributes/character traits which are not time-bound, i.e. they do not lose their meaning either in the absence of time or in its overabundance. "Sam is taller than Tom" would make as much sense in an immortal universe (where the two live forever) as in our current mortal condition. Furthermore, even if Time froze, we could still sensibly say that "Sam is taller than Tom". Yet I argue that in an immortal world with immortal human beings it would not make sense to say that "Sam is more intelligent than Tom", because given enough time every normal (not mentally impaired) human being could solve a complex series of problems or come up with an ingenious invention, although I would also argue that the very idea of "solving a problem" is itself meaningful only to mortal beings; honestly, what "problem" could an immortal human possibly have? That is, the very concept of having a "problem" is time-bound as well. All "problems" have something to do with the possibility of our survival, don't they?
Logically speaking, then, it follows that the reason we honor highly intelligent individuals, geniuses, is that they solve or dissolve our "problems" faster and thus help us live some more years, experience more pleasure or have more power. That is, in an immortal world with immortal human beings, there would be no difference between Einstein and a math idiot.
Upon further philosophical investigation, therefore, "intelligence" and "problem" turn out to be what they are because of the time variable, and the time variable is so valuable for us because we harbor a perennial "will" to live on, an as-yet mysterious force to avoid death. The AI community has been consistently ignoring the issue of "will" because it simply would not fit in with the dominant computationalist/cognitivist paradigm: to "program" an AI to "will" or "desire" something is a contradiction in terms; it is impossible, again by definition, to formalize one's will to live, since such "will" is the background against which and for which all formalization takes place.
I'm well aware that within the AI community there is no ultimate consensus as to what Intelligence is really about. The mainstream definitions, like Turing's, leave Intelligence per se undefined; other accounts define Intelligence in terms of calculation in its broadest sense. Psychologists like Gardner define Intelligence in terms of "solving problems", coping strategies or adaptability. We know what Intelligence does, what benefits it yields, what organisms have it, how many types of it there are, yet we have no clear idea what Intelligence is in and of itself. More decisively, I find all such definitions insufficient since they do not let in the time variable, which, as I briefly demonstrated, is the "essence", metaphysically speaking, of Intelligence: all forms of intelligence you pointed out in your previous answer presume rapidity; a person who takes a few minutes to come up with an ingenious improvised joke is, according to Gardner, less "verbal-linguistically" intelligent than, say, Woody Allen.
Admittedly, my previous definition of Intelligence takes the verb "calculate" for granted as, for the most part, I was solely trying to temporalize Intelligence.
Thus, I put forward a more refined version of my tentative definition:
Intelligence is an organic agent's cognitive capability to calculate as rapidly as possible, that is, to spot identities between significantly relevant variables in an unfamiliar environment, in order to achieve a desired goal.
If, as I think, this formulation of Intelligence is true, then the two phrases "significantly relevant variables" and "as rapidly as possible" pose a serious challenge to strong AI. The so-called "frame problem", extensively discussed by Hubert Dreyfus, targets the question of "relevance": how can an AI autonomously prefer or choose to perceive one fact/variable over another? Nonetheless, the question of time-boundedness--how an AI which is not bound by time, does not need to reside within time (unless rigged to do so), and hence is indifferent towards the prolongation of its existence (does not have a "will"), would come to need and wield Intelligence--remains for the most part unexplored.
Roger Penrose's "The Emperor's New Mind" could help. He gives us the conditions that must be met by an AI to be recognized as conscious, i.e. it speaks from the perspective of a contemporary view of mind and consciousness, although it is a bit too formal for first-year students.