Stephen Hawking famously defined intelligence as "the ability to adapt to change"; a more precise definition (one which, in a sense, subsumes Hawking's as well) would, I think, read:
"Intelligence is the ability of a living organism to perceive or calculate the relations between (basically) two variables in time (within the shortest possible time-span)."
Given the temporal qualification, one important implication of this definition is that Intelligence as such evolved only because we are mortal; that is, it is because we know that time is "running out" that we have, at some point in our natural history, felt the need to speed up spotting the relations between phenomena. In principle, in an immortal universe there would be no difference between Einstein and an idiot, for, given enough time, anyone without a neuro-cognitive deficiency could solve a sophisticated physics problem.
I wonder whether such a way of looking at Intelligence might have serious implications for the project of Artificial Intelligence.
What concerns me, therefore, is the following line of reasoning:
1) Intelligence is one's capacity to perceive or calculate the relations between variables within the shortest possible time.
2) Intelligence evolved only because it furthers our survival.
3) We care about furthering our survival only because we are mortal organisms, only because we dread our death, our finitude in time.
4) We dread our own death because we have a biological component, which takes pleasure in certain sense-impressions and their prolongation, as well as a psychological component, which harbors certain desires and the prospect of their future fulfillment.
5) An AI machine, by definition, does not have a psycho-biological component; if it did, it would be a clone and not an AI machine.
6) A perfectly viable AI machine would not be concerned(!) with its finite existence, as it has no desires, nor does it take pleasure in sense-impressions.
7) An AI machine would not be in need(!) of Intelligence to begin with unless it were programmed to be concerned about survival; and if it were so programmed, it would be more of a robot and less of an Artificial Intelligence.
ERGO,
8) AI, in its most ambitious conception, is not possible.
I would appreciate comments and critiques on the "possibility" of AI, which I hope to incorporate into an article in the philosophy of AI.