Self-explanatory perhaps, but imho, misconceived.
Memorably, philosopher John Haugeland observed that 'true AI' is a 'can't get there from here' problem. The 'problem', he said, is that computers 'don't give a damn' - about anything. How can we get a computer to 'give a damn'? We have no idea how even to start. We can fake it, no doubt.
I think you need to define imagination. When you proffer a definition, we'll have something to talk about.
Human imagination evolved from primate and mammalian imagination, which evolved from, etc., back to the origin of life. Biologists do not know much about all of this, but it belongs to the realm of life. Robots are made by people who know even less about that than biologists, and most of them do not even care to learn about imagination when they try to conceive a robot. Robots do not have any will of their own or anything remotely related to it. Robots are not conscious in any way. Robots are not intelligent in any way remotely related to what animal intelligence is. They are action-reaction devices, the exact antithesis of intelligence: no more intelligent or conscious than a doorknob or a rock. Whatever they do that follows an algorithm is by definition not intelligent. Conceiving such an algorithm (a robot) requires intelligence and imagination, but the enaction of the algorithm is by definition not intelligent. Inventing a good food recipe requires cooking imagination; following the recipe doesn't. Since robots act according to a design, they are, by definition of what they are, not intelligent. Strictly speaking, they do not even act in the sense an animal or a human does; it is the act of the robot's creator. An action-reaction device is by definition not intelligent.
Why do I have to state such obvious things? Why is it that people believe robots are already intelligent, or on the edge of becoming more intelligent than humans? This myth has been propagated by more than one hundred years of science fiction about robots. It is, I think, built into the archaic imagination of people to project themselves into fictitious entities. A few thousand years back, they called these entities gods, and now they call them AI robots. We have this propensity to anthropomorphise, and the modern era is based on the myth of the world as a machine, so the modern gods have to be machines. This is, frankly, a pathetic downfall of imagination.
I fully agree with Louis and Simon; I believe their answers are exhaustive. I would just like to say that when it is said that robots are "self-learning", it means that the robot is storing data on the positions of objects and obstacles (or whatever) in the surrounding environment; that is, it is storing technical data that helps it move without encountering obstacles, etc. The robot is a computer that maybe walks or does other things. Everything a computer can do is "decided" by those who designed and built it, that is, by whoever wrote the software program that runs the computer. In movies things are generally different: the man is a fool and the robot intelligent. But the end of the film is always the same: "the man cuts the electric cable that powers the robot and wins."
I think that people are romanticizing human intelligence too much. In my view, a high-level process such as imagination emerges from lower level processes. It appears complex because we are looking at it from our own point of view.
A good example is the famous Braitenberg vehicle [1]. It is an agent composed of two light sensors in the front and two wheels, each driven by a motor. Each sensor is directly connected to the motor on the opposite side, so that an increase in the sensed light directly produces an increase in the rotation of the opposite wheel.
This simple agent displays light-following behavior, even though no calculation is made to do so. Furthermore, from the outside, the behavior of such an agent appears complex, since light in the real world is a noisy signal and changes throughout the day. The complexity of the behavior is an emergent property of the action-perception-environment triad.
I don’t have a clear answer to your question, but to me, imagination is also an emergent property of such low-level processes. I agree that we are still far from having intelligent robots, but I think that it is a technical limit, not a philosophical one. With appropriate embodied learning systems, I think that robots could theoretically have imagination one day, although it would be specific to their own sensory-motor experience of the world.
Note:
[1] Braitenberg, V. (1986). Vehicles: Experiments in synthetic psychology. MIT press.
You can find here an online interactive simulation of Braitenberg’s vehicles: https://scratch.mit.edu/projects/26099509/
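Laura's cross-wired vehicle is simple enough to simulate directly. Below is a minimal sketch, assuming a single point light source, an inverse-square intensity model, and differential-drive kinematics; the function names and parameter values are illustrative, not taken from Braitenberg's book:

```python
import math

def sense(sx, sy, light):
    """Illustrative inverse-square light intensity at sensor position (sx, sy)."""
    lx, ly = light
    return 1.0 / (1e-6 + (sx - lx) ** 2 + (sy - ly) ** 2)

def step(x, y, heading, light, dt=0.1, width=0.2, gain=0.1):
    """One update of a crossed-wiring ('vehicle 2b') Braitenberg agent.

    The left sensor drives the RIGHT wheel and vice versa, so the wheel
    on the darker side turns slower and the vehicle steers toward the light."""
    # Two sensors mounted at the front-left and front-right of the body.
    left = sense(x + math.cos(heading + 0.5) * width,
                 y + math.sin(heading + 0.5) * width, light)
    right = sense(x + math.cos(heading - 0.5) * width,
                  y + math.sin(heading - 0.5) * width, light)
    # Crossed excitatory connections: each sensor feeds the opposite motor.
    left_speed, right_speed = gain * right, gain * left
    # Differential-drive kinematics: average speed moves, difference turns.
    v = (left_speed + right_speed) / 2.0
    omega = (right_speed - left_speed) / width
    return (x + v * math.cos(heading) * dt,
            y + v * math.sin(heading) * dt,
            heading + omega * dt)
```

Notice that nothing in `step` computes "where the light is"; the homing behavior falls out of the wiring plus the environment, which is exactly the emergence point made above.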
One of the greatest expressions of human intelligence is philosophy, notwithstanding the fact that every philosopher has his own definition of philosophy, and I am not a philosopher. The definition of philosophy I like most is that philosophy is "pure thought" and its primary goal is to explain how it is done and how the universe works. Until a robot invents and writes a new philosophical treatise, it is and will always remain a machine, even if it is able to move and carry out actions autonomously. Or, since a robot can certainly color pieces of canvas: until it imagines, invents and paints an artwork like Pablo Picasso's Guernica, it is and will always be a machine.
Braitenberg reproduced in theory the experiments W. Grey Walter did in practice 20 years earlier. I've never understood why Braitenberg was not taken to task for such plagiarism or ignorance of prior art.
>This simple agent displays light-following behavior, even though no calculation is made to do so. Furthermore, from the outside, the behavior of such an agent appears complex, since light in the real world is a noisy signal and changes throughout the day. The complexity of the behavior is an emergent property of the action-perception-environment triad.
I couldn't agree more - your passage is a classic cybernetic or postcognitivist interpretation, two schools which bookend classical AI with theories of cognition opposed to (good old fashioned) AI. I'm not sure where the OP places himself on this spectrum, but I also do not understand what bearing your explanation has on his question.
> a high-level process such as imagination emerges from lower level processes
undoubtedly true, as a result of complex associative and parallel processes on fractal scales we find hard to imagine, do not yet understand, nor do we have any idea of how to simulate.
The von Neumann serial reasoning machine is so absolutely unlike human thinking that the originators of the term Artificial Intelligence should be jailed for fraud. (imho :)
I now see a third way the OP's question was (imho) misconceived. "Will intelligent robots..." presupposes that robots are, or will be, intelligent. I think this assertion begs the whole question.
I don't think it is acceptable to draw an analogy between ''imagination'' and listing the whole realm of possibilities within a formal world under a certain formal constraint. Poincaré mentioned the fact that mathematicians mostly deal with realms of possibilities that are either extremely large or infinite, but find, mostly unconsciously through their imagination, a means to converge; nobody knows how, but it is certainly not through explicitly considering all the possibilities, which is formally impossible in a finite time. Many scientists and mathematicians mention aesthetic considerations leading them toward their breakthroughs, and in their consciousness this is not remotely related to browsing all possibilities.
I have not yet read all the answers above carefully, but shall do so after I more impulsively give some feedback of my own. This does address, as much as is reasonable, the definition of "imagination". Obviously, imagination is openness to possibilities in experience (whether you make them happen, or discover them, or think you may discover them / do them). For a human this openness is great, but it is not without any parameters whatsoever. It may well, though, be without any parameters we can imagine, BUT which we might now be able to discover.
Always, in some very general sense, the human is 'goal'-oriented: to get something done, to progress, to fill needs or desires OR to return to some homeostatic state. Obviously this embraces a lot, yet we have to ask: HOW CAN WE EMBRACE THAT? For an empiricist, the answer is always that at KEY POINTS, directly observable environmental aspects are always involved as proximate causes, but so are behavior patterns [ though NOT patterns we can imagine in advance (and, if the adult after-the-fact cannot imagine them, how can the developing child, THOUGH HE SHOWS THESE VERY BEHAVIORS?) ]. HOW CAN THERE BE such behavior patterns also involved (along with environmental aspects) as DIRECTLY OBSERVABLE proximate causes of KEY behavioral change? The only likely, sensible, mature, biologically-consistent type of answers (and perhaps, indeed, the ONLY POSSIBLE ANSWERS) involve "innate guidance to behaviors seen in behavior patterns." Like Sherlock Holmes, you can come to this conclusion if only by exclusion of other imagined "possibilities".
BUT, if we cannot imagine what is involved (as I have indicated), then how can we model it? We can't. BUT WE MAY BE ABLE TO DISCOVER THEM: by seeing things we have never seen before, _AS_ [/how] WE HAVE NEVER SEEN THEM BEFORE! It could be that the inceptions of new types/levels of learnings could be rooted in simple perceptual or perceptual/attentional "shifts" [ and such small changes in developing attentions, IN AN OTHERWISE ALREADY ADAPTED COMPLEX OF BEHAVIORS, could well suffice for the major changes in perspectives yielding (and being) the inception of new types of learning, unfolding into new abilities of abstraction ]. An empiricist cannot abandon the POSSIBILITY of such concrete signs (and what would Sherlock Holmes say?). How can we see what we have not seen, when it cannot be imagined? Of course, the answer is to "see" (yet it is seeing) IN NEW WAYS, USING NEW TECHNOLOGY, giving us a VIEW we have never had before and could not have without assistance. Two technologies ripe to work together to see what NOW can be seen are: eye-tracking technologies and computer-assisted analysis software. Yet let me quickly say that not even those technologies will likely yield results seen, except by those with a most educated, learned and principled (biologically-congruent) perspective. I myself (like the rest of us) have only been able to imagine (OF COURSE) the possible nature of these "perceptual shifts" INDIRECTLY, BY the species-typical RESULTS they yield (the CONSEQUENCES and ramifications of the new possible types of learning and levels of thinking), and this is what I outline in my paper, "A Human Ethogram ...".
From this, though, a wise, learned person, using these new technologies, MAY be able to imagine when and where to look for the innately-driven patterns of behavior OR at least the new aspects of the environment which become subjects of attention (and new aspects of what is worked on in working memory) -- with a necessary understanding of earlier cognitive ontogeny, AND A FULL APPRECIATION OF THE contextualization of cognition (both simple and complex) BROUGHT FORWARD from our memory capacities. [ The huge possibilities of our visual-spatial memory, along with our declarative and procedural memories, contextualizing the episodic buffer and working memory, are awesome; it is also the great possibilities of these Memories which make it quite plausible that mere "perceptual shifts" in an otherwise adapted complex could well suffice for KEY behavior-pattern changes (new learnings, yielding awesome new abilities -- including abstract thought). ]
Now, everyone always asks me, when I give my "Answers": What does this have to do with the Question? Well, friends, THESE are the very open-type parameters which, though amazing and hard to discover AND VERY OPEN, do nonetheless operate in (and DELIMIT) human learning and development, INCLUDING ALLOWING FOR (and being the basis of) IMAGINATION -- by the way: of course: such covert behaviors (as imagining) are part of our understanding of the very important (contextualizing) covert behaviors that ARE VITAL PARTS OF BEHAVIOR PATTERNS , themselves, in key environmental circumstances EVEN AS they (those patterns) develop through another stage. (Also, for the relevance of my answer, see the P.S., at the bottom.)
One more thing that makes all this hard is that it involves replacing some core 'assumptions' that, though baseless, groundless, without any foundation and needless (unjustified) ARE NONETHELESS WHAT MOST PSYCHOLOGISTS (and the rest of people) BELIEVE and this results in the absolute INABILITY TO IMAGINE BEING HELPED TO SEE MORE, because of the nature of what THAT "more" would have to be: in particular, innately-driven. Here are some of the worst commonly-held (baseless) 'assumptions':
(1) All that is significant and innate is present at birth .
(2) The more learning there is, the less innate guidance -- this taken to mean: OF ANY SORT.
Both of these assumptions can be justifiably replaced by THEIR OPPOSITES -- and that is more consistent with biology (and behavior IS biological functioning) and more-likely true. [ (Number (2) may be seen only partially replaced by an "opposite".) ]
ALSO, there is this good "sign":
Abandoning these false pseudo-'assumptions'/presumptions also totally eliminates the nature/nurture debate OR any duality there at all. THAT duality is not only not likely, but it is likely that innate aspects of behaviors are AT LEAST IN EFFECT simultaneously present IN behavior patterns (yes, even those patterns that are most deliberate/conscious and INVOLVE OUR ATTENTION !! -- which is the core of what I have been talking about, above). [ For decades it has been known that there is no foundation for a nature/nurture duality, and this viewpoint GETS YOU OUT OF IT ! ]
The starting point for further understanding the full justification of my perspective is "A Human Ethogram ..." AND I have explicated this view in HUNDREDS of related essays, in Questions and Answers, here on researchgate (start at the Profile, click Contributions, then finally CLICK Questions and CLICK Answers). Here is a link to "A Human Ethogram...":
Article A Human Ethogram: Its Scientific Acceptability and Importanc...
P.S. All this not only provides concrete foundations for cognitive science, but also similarly for artificial intelligence.
What I see here in some of the answers is itself a failure of imagination. No one anymore seriously suggests that intelligent robots of the future (unless WW III prevents us from getting there at all) will be much like present-day robots and computers. Some future robots would likely have neural networks built from organic/biological components cultured in a lab (present-day lab-cultured meat may be just the beginning).
Imagination can never be a technique. For imagination we need consciousness, and this is bio-physics. It will never be possible for computers to imagine; to say otherwise is a misuse of language.
''Some future robots would likely have neural networks built from organic/biological components cultured in a lab (present-day lab-cultured meat may be just the beginning).''
This does not fall under the definition of a robot. This is bio-engineering of a biological organism. Modifying the living cannot be called robotics. As soon as one is dealing with a biological organism, an intelligent one on top of that, then we enter the freakish Frankenstein domain where any sense of ethics is left behind. That would be a return (it never really stopped) to slavery. The whole notion of the robot was about a purely mechanical slave. The fact that it is a mindless, consciousless device removes the ethical concern.
What I envisage would not be a modification of a living organism (creating a cyborg) nor the bioengineering of a whole organism. Rather, it would be a case of incorporating some small snippets of biological material into hardware components (e.g. into chips, in present-day technology). If you don’t want to call a largely mechanical-cum-electronic android with some biological snippets a robot, fine, but remember terms may evolve to extend to new circumstances (e.g. a keypad is not a dial, but we still "dial" phones nonetheless).
> What I see here in some of the answers is itself a failure of imagination. No one anymore seriously suggests that intelligent robots of the future (unless WW III prevents us from getting there at all) will be much like present-day robots and computers. Some future robots would likely have neural networks built from organic/biological components cultured in a lab (present-day lab-cultured meat may be just the beginning).
What I see is an excess of imagination of a particularly technofetishist kind.
> No one anymore seriously suggests that intelligent robots of the future
Let's slow down and define what we mean by 'intelligent'. Imho we haven't made much progress on this since the great AI crash of '89.
>(unless WW III prevents us from getting there at all)
or sea level rise, or increasingly severe storms which destroy infrastructure and economies, or endocrine disrupters reducing human reproductive capacity, or the rise of some plague which is just a bit more virulent than ebola, or lack of fresh water, or....
Let's face it, technoculture could grind to a halt any day.
> Some future robots would likely have neural networks
Quite possibly; some already have. But will that make them 'intelligent'?
> built from organic/biological components
I've read about DNA-based storage and the use of highly sophisticated molecular processes to do dumb Boolean procedures, etc. But if all we're doing is making smaller, faster von Neumann machines with biological components, we're still making smaller, faster von Neumann machines, which can only do what von Neumann machines can do.
The point is, there are fundamental qualities of the kind of 'intelligence' we identify in people which we have NO IDEA how to implement in a logical machine - empathy, compassion, creativity, imagination, etc. And maybe it would be a bad idea anyway. David Cope's EMI composes passable Bach cantatas; it's a cute experiment in style grammars, but do we really need an infinite number of Bach cantatas? Do you really want a smart car that has its own opinions about where it should take you? Why would anyone want this?
RE: What I see is an excess of imagination of a particularly technofetishist kind.
I think of it as more of a techno-Maoism: “Let a hundred flowers bloom….” ;)
RE: Lets face it, technoculture could grind to a halt any day
Indeed. I wryly suggested WW III, but actually I am more concerned about the possibility of a Carrington Event. We’d better make sure our electronic archives are backed up with hard copy. Sadly, many libraries have pulped their paper journal backsets.
RE: The point is, there are fundamental qualities of the kind of 'intelligence' we identify in people, which we have NO IDEA how to implement in logical machine - empathy, compassion, creativity, imagination, etc.
We don’t “implement” those in humans either. Humans have some such capacities innately, to some degree, and we can enhance them over time just by interacting with humans in certain ways. Anyway, if the hardware-cum-wetware closely copies human wetware processes and our interacting with them in those same ways leads to the appropriate behaviors, we wouldn’t have to understand why it works any more than we do in the human case.
RE: Do you really want smart car that has its own opinions about where it should take you?
Of course not, and for the same reason that I wouldn’t want a human cabdriver like that.
It's a fact that we don't know the imagination of those who are working on the 'robot with imagination'. It's also true that we don't know what a machine can do with the help of human neurons. Nobody knows, but it could become the most effective weapon; we can even imagine the end of the world. Imagination is just a word that has a different meaning for each of us. It is the task of the artist to show the world that the muse will be at the side of those who know where to find her, and she will not be hidden in an electronic mechanical system but in the bio-centered consciousness of those who are elected.
Well ... Suppose a human chess player is playing a chess game at world championship level against a computer. Entirely possible these days! Among other things, both players will consider (imagine?) many possible sequences of moves she/it might make and choose between the possible moves open to her/it.
But it seems that most colleagues who answered my question will just about allow the word imagine for the human player, but definitely NOT for the computer program.
Can someone explain why there is this difference? Just hardware versus wetware? Or brain v mind? Or is it just that we think we know machines/robots are inherently inferior, even though we definitely don't know the future? This last possibility seems to me rather old-fashioned, a bit like saying machines can't possibly fly.
When playing chess or dealing with a formal problem, you can list a number of possibilities in your imagination, and we can design computer algorithms that do so much faster and more extensively than any human can. But human imagination can do other sorts of things than extensively exploring realms of possibilities when solving formal problems. To mention just a few of these other possibilities: it can realize that the formal problem can be reformulated into another form more easily solved; it can say "what the heck, why should I play this game in the first place?" and do more rewarding things instead; it can say "what the heck, let's write a computer program to do the damn stupid thing"; it can be used to answer your question and to try to find out what imagination is, because it is intrinsically reflexive; and because it is reflexive, it knows that it is not mechanical (this is the answer to your question), but it cannot give a mechanical answer to it, which is your demand. If imagination were mechanical or specifiable, then it would not be imagination, i.e. reflexive and creative; providing such an answer would be the antithesis of it, although all such answers are produced by it. It can use totally other avenues, of which we are not even conscious but which are built into our biological bodies, and which converge to solutions in infinite realms of possibilities (see Poincaré on this); many mathematical conjectures which were established as theorems hundreds of years later had been found without the mathematician knowing how he knew they were true. Fermat's conjecture was proven only recently. Most of what we do in normal life, outside the formal realm, is done imaginatively in ways we have not remotely a clue how we do, and this is typical of human imagination, best exemplified by our artists.
P.S.
Let's return to game playing. As a kid I learned to play tic-tac-toe. At first it was great fun to try to win. But gradually I learned a few rules that guaranteed I could not lose and could only win against those who had not yet learned the rules. At that point I was not playing anymore but acting as a computer, and it was not a game anymore. I did not need my imagination, and basically I stopped playing when I started being a machine, because it was no fun anymore.
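The "few rules" that make tic-tac-toe unlosable illustrate the point nicely: perfect play can be produced by a purely mechanical enumeration of the game tree, with no imagination anywhere. A minimal minimax sketch (the function names are my own):

```python
# Perfect tic-tac-toe via exhaustive minimax: a purely mechanical
# enumeration of the game tree, with no imagination involved.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Best (score, move) for `player` on a 9-char board: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None                      # board full: draw
    other = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        score, _ = minimax(board[:m] + player + board[m + 1:], other)
        if -score > best_score:             # opponent's loss is our gain
            best_score, best_move = -score, m
    return best_score, best_move

def play_out():
    """Let two minimax players finish a game; perfect play always draws."""
    board, player = ' ' * 9, 'X'
    while winner(board) is None and ' ' in board:
        _, move = minimax(board, player)
        board = board[:move] + player + board[move + 1:]
        player = 'O' if player == 'X' else 'X'
    return winner(board)
```

Following the move table this search produces is exactly like following a recipe: the "player" never deliberates, it only looks up consequences.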
[ I have read the well-regarded book Theoretical Foundations of Artificial General Intelligence (2012) and several other things on AI, so I DO KNOW what I am talking about there; my own field is psychology, developmental psychology (esp. cognitive) -- so I know that too. I was a very early cognitive-developmental human ethologist.]
Let me try to be more direct in expression of my view, hopefully making it clear how certain new ways of investigating and of thinking (via new discoveries) DO RELATE TO REAL AI: (I guess I should say to start out that: TODAY we cannot properly call any acts of a robot meaningfully related to anything one could call "imagination" -- BUT humans (psychology researchers/theorists) do not well understand imagination in the human either, and therefore clearly will not be able to simulate it.) Here is an overview of the details:
If we get results and findings giving us the further needed foundations of cognition and cognitive development, THEN: BECAUSE these are concretely based (at their inception) _AND_ all significant covert behaviors ** still clearly relate to EARLIER behavior patterns/environmental aspects which initially yielded clear overt behavioral changes [(and which the eventually-resultant covert behaviors (patterning) can still be seen as LIKE (when they were concretely-based), and thus are now justifiably inferred)], we can simulate all that concretely based stuff and the related covert and overt resultant behaviors and thus fully cognitively simulate the human -- which includes imagination via the final possible states of working memory.
The definition of all other important things (motives and emotions) are all reliant on the developing and developed functional cognitive structures we must come to better understand and better know, SO the role of motives and emotions can be understood in those terms and needed species-typical biases in the salience of memories and imagined goals and responsivenesses then also simulated (as appropriate at each stage of development).
Thus, imagination in a robot could be possible. BUT if we cannot recognize we need more foundational knowledge of cognitive development AND that we likely have to use technologies (eye-tracking and computer-assisted analysis) to see and find things we cannot otherwise (normally) parse out and see (distinctly or separately) at all THEN there will never be meaningful imagination in a robot NOR will we well-understand the human, even in its key basic regards. We must realize psychology IS still an infant science and must start anew with new methods to see new things and then finally understand key basic things (all thoughts and assumptions contrary to this view are counter-productive -- and will never work for a really good constructive view -- and must be over-come with acknowledgement of real and likely possibilities). I will again refer all to papers under the 2 Projects ( https://www.researchgate.net/project/Developing-a-Usable-Empirically-Based-Outline-of-Human-Behavior-for-FULL-Artificial-Intelligence-and-for-Psychology and https://www.researchgate.net/project/Human-Ethology-and-Development-Ethogram-Theory ) AND to the hundreds of related Questions and Answers (essays) I have made on researchgate for A LOT of explication.
For more on what psychology needs (and how current problems in the field show that more foundation is needed, and of the sort I propose, and which is now possible to investigate and get findings on), _and_ which is basically, at the same time, ABOUT the high quality concrete knowledge which real AI needs, see the Question _and_ Answer to :
This may provide a little more detail and perspective. Let me also add that getting "up-to-speed" with the current relevant knowledge and classic theories of psychology, and coming to understand all that you would need to understand in extant psychology, IS NOT A HUGE TASK. You could put together an AI team with some members providing the needed psychology background, even without employing professional psychology researchers -- it would likely have to be a team, though. AND, I suppose the team may have to include the finest among those seeking real AI (AGI), such as Thorisson.
** FOOTNOTE: This requires a full understanding of the Memory capacities and the various necessary species-typical type contents needed at each stage of development, for operation there AND, in the proper adaptive circumstances, providing a BIG part of the CONTEXT for those new developments and subsequent new types of learning (and eventually new ways of thinking).
I have just today (Thursday, Nov. 2, 11 a.m. US CST) updated the last Answer, above, in some possibly very helpful ways. I did not think the additional direction I try to provide needed its own separate Answer, and too much preface would have to be involved (incl. repeated "stuff") if I made it a new "Answer" -- yet I am alerting you to its existence (with this new post). So, just see the newly-added highlighted "stuff" at the end of my previous Answer, to see the new "stuff": leading you to an essay showing how psychology's needs for new concrete foundations ALSO would result in findings that (I think very clearly/obviously) would concretely fulfill AI's needs.
‘’ TODAY we cannot properly call any acts of a robot meaningfully related to anything one could call "imagination" –. OK
BUT humans (psychology researchers/theorists) do not well understand imagination in the human either, and therefore clearly will not be able to simulate it.) ‘’. OK
‘’ and thus are now justifiably inferred)], we can simulate all that concretely based stuff and the related covert and overt resultant behaviors and thus fully cognitively simulate the human -- which includes imagination via the final possible states of working memory.’’
IF imagination and IF the living WERE a MACHINE, i.e. reducible in principle to understandable processes, and IF we managed to discover these mechanical processes, THEN YES, what you SAID would be possible. BUT NONE of the IFs is true, and so the conclusion is untrue. The thesis is that ALL is MACHINE-LIKE. Although I totally agree that all that is in science, and will ever be in science, is machine-like, it does not mean that most of concrete reality is reducible. In fact, none of it is reducible. Scientific abstraction allows us to control the world; it is objective and true, but it is not concrete and does not exist concretely, though we can build concrete machines following these abstract principles. We can also discover a lot about organisms and the physiology of organisms, but none of that is remotely on the same level as living, which is not abstract, cannot be written on the page, has agency, exists for itself, and cannot be simulated in principle. The very basis of science excludes a priori from its realm of abstraction anything other than the machine-like. By 6 months of age, babies make the animate-inanimate distinction; science will never make it, since all is necessarily inanimate in its domain. It is not a question of the advancement of science but of what science is. So we disagree at the level of the philosophy of science, on what science is, or on the status of abstract scientific knowledge vis-à-vis concrete reality.
Taking into account that there are at least two basic forms of imagination, the answer would be yes, in one of them.
We have creative imagination and associative imagination. Robots will some time soon be able to have associative imagination, which is not a minor achievement.
You say some nice things, then you say: "...THEN YES what you SAID would be possible BUT NONE OF THE IF is true and so the conclusion is untrue " (end quote). WHAT I JUST QUOTED (of you):
This is very simply an unjustifiable view you have, for a very clear, scientifically indisputable reason: MY VIEW, IN ITS Entirety AND at each point, is TOTALLY TESTABLE/VERIFIABLE. PERIOD. You might benefit from seeing my most recent Question (with my Answer there); see: https://www.researchgate.net/post/Have_things_having_the_role_of_a_MICROSCOPE_for_psychology_been_developed_which_may_be_used_for_investigation_of_important_observational_specifics? ; you are very simply "sticking yourself" in the past, with just what you already "know".
I. I agree with Carlos' first paragraph, and would welcome an expansion of his second paragraph.
II. I've tried to put together some more nuanced (and hopefully bridge-building and peace-making) words that a majority might be able to go along with -- but somehow I doubt it! Anyway, they follow:
We are (or include) biological systems designed by evolution (itself designed by ??) to possess many abilities, including the various types of imagination. Thus it will not be too surprising if we can design other, non-biological systems with at least some aspects of imagination (or, more precisely, the corresponding behavior).
Living organisms evolved. They are not like human-made systems, and they were not designed the way humans design their systems. Organisms evolved, and we cannot keep the human analogy of design, which is a creator-created metaphor that many creationists like but which is not really a good analogy. Humans invented their culture, invented many technologies, and constructed many buildings and machines with these technologies, but we have never built a natural entity of the type nature evolved. Never. It took billions of years for these to evolve, and we usually take less than a year to construct our systems, and we never started from scratch, which Nature did. It would be rather surprising, I would say a pure miracle, to have ''non-biological systems with at least some aspects of imagination''. It would be a miracle even if we could design an electron. Imagine inventing something that Nature took billions of years, on a whole planet, to produce. I can't imagine that.
This is just to draw attention (hopefully) to the second half of my question which hasn't had much attention. However, I note Laura's remark:
" I think that robots could theoretically have imagination one day, although it would be specific to their own sensory-motor experience of the world. " (my italics) @Laura_Cohen
which I tend to agree with.
I also agree with Brad to at least some degree:
"Thus, imagination in a robot could be possible. BUT if we cannot recognize we need more foundational knowledge of cognitive development AND that we likely have to use technologies (eye-tracking and computer-assisted analysis) to see and find things we cannot otherwise (normally) parse out and see (distinctly or separately) at all THEN there will never be meaningful imagination in a robot NOR will we well-understand the human, even in its key basic regards" @Brad_Jesness.
which is perhaps also relevant to the second part of my question.
See also my HOOP project (most recent updates), in which I am interested in the origins of metaphysical belief systems (eg Gillian's Hoop) and am currently trying to design in some detail an AI system (see standard textbooks of AI, not the current wild commercially driven hype -- and not necessarily running on a computer, of course) which in principle could imagine/create (standard dictionary definition, here, note) a paragraph-long story (been done, of course!), the concept of a green man, and Gillian's Hoop.
NB it's NOT a matter of scanning all possibilities. AI systems have always been cleverer than that from Turing onwards. Consider heuristically guided search eg chess playing, constraint satisfaction problem-solving (practical applications!) and route finding (Sat Nav!) .
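As an illustration of the heuristically guided search mentioned above, here is a minimal A* route finder on a grid, of the kind a Sat Nav generalizes. The Manhattan-distance heuristic steers the search toward the goal instead of scanning all possibilities; the grid representation and names are illustrative:

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 4-connected grid; '#' cells are walls.

    The Manhattan-distance heuristic makes the search prefer nodes
    that look closer to the goal, so it never enumerates every path."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # (f = g + h, g, node, path)
    seen = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None                                  # goal unreachable
```

Because the Manhattan heuristic never overestimates the true remaining cost, the first time the goal is popped off the priority queue the path returned is optimal.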
Anyone have insights into the social and cognitive origins of Gillian's Hoop?
I recommend the very recent novel "Klara and the Sun" for its attempt to imagine an android Klara's view of its (human-populated) world, with emphasis on what the android itself is perhaps able to imagine. By the distinguished Japanese-born author and Nobel Prize in Literature winner Kazuo Ishiguro. Fascinating and thought-provoking!
The book "Consciousness: Confessions of a Romantic Reductionist" by Christof Koch, MIT Press paperback, 2017 (originally published by MIT Press, 2012), I find highly relevant to this discussion and also very readable. Very thought-provoking. I recommend it.
This amazingly late remark is just to suggest that ChatGPT and its rivals and developments do NOT have non-trivial originality and therefore do NOT have true imagination, the essence of which is to envisage something new without reference to anyone else's ideas.