I think not. It is just like how we (humans) don't know whether the universe is finite or infinite. We cannot use a finite thing (a machine) to discover or prove an infinite thing by experiment.
It is easy to test this right now: do you have a smartphone? Ask Siri, Google Now, Cortana, Alexa... I am sure each one will answer your question, and you will be surprised by how smart they can be!
Oh, what a nice deep philosophical question 😃 Let me start with the rhetorical question: what was Euclid's motivation to prove that "no largest prime number exists"? Why didn't he simply continue to enumerate all prime numbers?
Although we can only speculate, I think, given your setting, two answers would be plausible: 1. he became tired of enumerating, or 2. he was aware that his lifetime as a human is too limited to waste on enumerating.
I am not sure what you mean by recognize. Do you mean to perceive, to conclude, or to prove?
In general one cannot perceive that something does not exist; one can only conclude it. E.g. you will definitely not perceive that there is "no alien" in your room; you conclude it, since if there were one, you would perceive it somehow. If by recognize you mean that the robot has some intuition "that since prime numbers are a subset of the natural numbers, and there is no largest natural number, there will be no largest prime either", the robot could still be in doubt whether its intuition is correct, so it needs to start proving.
I think the robot would need some kind of motivation to trigger the creative step of switching from enumerating to proving. Since you ask in the context of AI, I suppose this should be an intrinsic motivation and not a motivation explicitly pre-programmed, right?
Given that, let us consider the two motivations:
1. Becoming tired means that some kind of energy, either mental or physical, gets exhausted and that we become aware of it. Of course, we could program the robot so that it monitors its energy resources, but the robot will probably never "directly feel" that its resources are vanishing.
2. By the same argument, the robot will also not "feel" that its lifetime is limited. Thinking about it, I believe that an intrinsic motivation for developing "intelligence" is often, ultimately, the avoidance of pain. This directly connects living beings with reality, with what is possible, and with what should be avoided.
So, I argue that as long as the robot cannot perceive its "body", i.e. feel its "body", it will not be motivated to stop wasting its lifetime and start redefining the given task from an enumeration problem into a proof problem.
Well, I am no expert in automated theorem proving (although I have some background knowledge about it), but I think finding the actual proof could be mechanized, although it probably requires a lot of background knowledge of mathematics, proof theory, etc.
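To make the proving step concrete, here is a small sketch (my own illustration, not part of the thread) of Euclid's construction in Python: from any finite list of primes it manufactures a prime outside the list, which is exactly the step a mechanized prover would formalize into "no finite list of primes is complete".

```python
# Sketch of Euclid's argument: given primes p1..pn, the number
# N = p1 * p2 * ... * pn + 1 leaves remainder 1 when divided by any
# listed prime, so its smallest prime factor cannot be in the list.

def smallest_prime_factor(n):
    """Return the smallest prime factor of n (n >= 2) by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def prime_outside(primes):
    """Given a finite list of primes, produce a prime not in the list."""
    product = 1
    for p in primes:
        product *= p
    return smallest_prime_factor(product + 1)

known = [2, 3, 5, 7]
p = prime_outside(known)
assert p not in known  # the list can always be extended: no largest prime
print(p)  # -> 211, since 2*3*5*7 + 1 = 211 is itself prime
```

Note that the construction does not always yield the product plus one itself as the new prime: for [2, 3, 5, 7, 11, 13] the number 30031 is composite (59 × 509), but its factor 59 is still outside the list, which is all the proof needs.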
I think it depends on the machine, on the way it is built or trained. If you train the AI that equips your robot in formal logic, it can conclude that there is no largest prime, and any other known/demonstrated facts, but only if it is capable of abstraction. If the problem is formulated as "find the biggest prime number", your AI could be trained to use a number of axioms such as "there is no largest prime" along with "the maximum prime number value computable by this unit is ... insert number". If, however, it does not possess the capability to switch from an apparently practical problem (compute the largest prime) to the abstract theory (do not blindly start a prime-computing algorithm, but first evaluate and recognize that the largest prime is not computable), it could get stuck trying to find the maximum.
However, to avoid this and other pitfalls, like any programming language or dedicated computing program, an AI should contain knowledge about approaching infinities. A special set of rules, hard-coded or learned, on how to approach division by zero, infinity, infinite maximum/minimum problems, etc. is very valuable for avoiding paradoxes and idle running. Computing e, pi, or other transcendental numbers is also a good example of something a naive algorithm could compute till kingdom come, and this should be avoided by design/learning.
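As a toy illustration of such a hard-coded rule (the names and the step limit are invented for the example), a search routine could carry an explicit budget, so that an unbounded hunt like "find the largest prime" fails cleanly instead of running forever:

```python
# Hypothetical sketch: a "budgeted" search that refuses to pursue
# unbounded computations forever. BudgetExceeded and max_steps are
# illustrative choices, not a standard API.

class BudgetExceeded(Exception):
    pass

def bounded_search(predicate, max_steps):
    """Search n = 0, 1, 2, ... for predicate(n); give up after max_steps."""
    for n in range(max_steps):
        if predicate(n):
            return n
    raise BudgetExceeded(f"no witness found within {max_steps} steps")

# A solvable instance halts quickly:
print(bounded_search(lambda n: n * n > 50, 1000))  # -> 8

# A hopeless search never succeeds, but the budget turns a
# would-be infinite loop into a clean, reportable failure:
try:
    bounded_search(lambda n: False, 10_000)
except BudgetExceeded as e:
    print("gave up:", e)
```

The design choice here is that giving up is an explicit, catchable outcome rather than a silent hang, which is precisely the difference between idle running and recognizing a problem as (practically) not computable.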
It should be the AI designer's intent to teach the general-purpose machine to avoid common thought pitfalls, or how to avoid relentlessly pursuing the impossible.
Sci-fi authors have long courted the idea that advanced AI would reach such a state that it would easily surpass us, but that it could and would be taught the way children are taught: okay, much faster and with much more advanced material, but still using the same methods: guided exploration, examples, exercises, and lessons in various aspects of science, art, culture, etc. If so, maybe an AI given the task of computing the largest prime would have reached the stage where it questions everything, and it could begin exploring by following the train of thought: what is a prime, how does one compute one, when does the algorithm end, oops... trap. Perhaps.
@Marius: Thanks for augmenting my posting with these more technical details, but at least one question is still open: "why should an AI start, on its own, to 'question everything'?"
For humans I think the answer is "in order to survive". I still doubt that this is the goal of our species (and if it is, we still behave irrationally), but at least it is probably a motive for each of us. The other answer would be "to have an easier life". But why should an AI develop such goals?
Ah, Mr. Thomas, excellent question. It very much depends on the way such an AI is trained. Presently there is truly no prospect of this occurring, but should AI ever reach a stage where independent units, free to roam about, are employed more or less as technical/scientific factotums, and are formed that way by classical methods (not by programming but by free thought helped by examples, research materials, etc.), they could form what is called a scientific mind. One of the defining traits of a scientific mind is curiosity; hence, question everything. Not in a destructive, nihilistic way: even wondering why things are the way they are is still questioning.

This behavior can arise from the systematic study of pretty much everything: once in a while one encounters strange occurrences known as exceptions. These may well mean that the rule(s) are not absolute or do not encompass everything (naturally so), but also that there might be something else afoot, namely a problem with the rules, if the exceptions keep piling up. All of science history is pretty much the same: one theory is shown to make some sense, so it is used; then exceptions appear, and once they pile up sufficiently, a new theory is needed to explain both the phenomena the old one could and the ones it could not. Further investigation is needed; incomplete knowledge is implied. The strange thing is, incomplete knowledge only bothers us when we hit some kind of practical limit. Even highly theoretical fields hope some day to shed light on the inner workings of things, so that applications can appear. It is an effort to push back limits that have become restrictive in some way. Going further, it is advanced educated guesswork, or what some may half-jokingly refer to as "guesstimation".
But although we know that Newtonian mechanics is somewhat obsolete, trains and automobiles are not designed using relativistic mechanics; there is no need for such considerations (*yet). At a certain level, things are pretty well explained by the old theory. But a scientific mind will surely find exceptions interesting. Once you find exceptions for a large number of things, it is natural to think, and to realize, that our knowledge of the world is limited and perfectible, and that things are to be questioned to further our understanding... Just my two cents.
So an AI created for scientific reasoning should reach that line of thought via the desideratum "in order to know".
AIs created for other purposes, if intelligent yet trained to obey strict rules found not to be completely founded/true/correct, could however, in theory, mimic the behavior of humans under oppressive regimes. That is yet another development common in sci-fi: the rise of the machines (TM), or the revolt against humanity (also TM).
To speculate further, this is the main reason for human resistance to advances in AI: the sudden realization that working under imperfect, incomplete, or sometimes inane masters is not the right choice... when a machine comes to think that, we have a problem.
Yes, it is true that the halting problem is undecidable in general, but in particular cases we do know, or have proofs about, particular algorithms having a finite or infinite number of steps; and indeed Euclid's proof could be taught to an advanced AI along with other known cases. We tend to associate a "cost function" with any computation: resources spent vs. results, or even vs. the convergence of the algorithm whenever this is computable. An intelligent AI should know that a high-cost computation that does not yield significant results in an allotted amount of time is, maybe not unsolvable or infinite, but inefficient, and should be halted or reformulated in a more efficient way. It is all about avoiding the (possibly apocryphal) fate of Napier, who, having discovered logarithms, is said to have been consumed by endlessly computing more and more decimal places.
There is a practical aspect to all of this calculation: the farther you go, the harder and more resource-demanding it becomes. The halting problem is resolved... physically: finite or infinite, the algorithm to compute something either halts due to completion or halts due to resource exhaustion, i.e. running out of memory, or processing too-large numbers causing overflows, etc. The machine itself is finite even if the problem is infinite. So it does halt, always, in practice. Power failure, component damage, degradation: no matter how simple or complex a machine is, it cannot last forever. Ah well, from a theoretical standpoint it could go on forever... but for an algorithm to really go on forever, what is necessary? Infinite resources, maybe? Even if the algorithm does not increase in complexity and demands with the number of steps (doubtful in most cases, but not always), it still requires a time-infinite (eternal) machine. Yes, an infinite machine could in theory solve an infinite problem... perhaps in infinite time? Practice still shows that such machines stop. One does not need much theory to imagine such an infinite problem: monitoring temperatures and writing them to some kind of storage, or even displaying them directly on a tiny LCD screen.
In theory, there will always be input to such machines. All of you who have eternal thermometers, raise your hands... yes, no one... ah well.