I know the results introduced by Gödel and others, but these are usually intended for theoretical purposes, such as the undecidability of certain claims.
What I mean is some simple problem that any human of average intelligence can solve easily.
For instance, I know that a Turing machine cannot conclude that the Mona Lisa's smile is enigmatic, whereas most humans can notice this expression. Procedures of this kind are what I am looking for, although, if possible, I would prefer procedures closely related to language interpretation or to calculus rather than procedures involving emotions.
Notice that every problem handled by a computer must be split into bits, just as every geometrical problem can be split into points. The impossibility could be a consequence of the nature of the problem, for instance, if it cannot be split into bits.
So far, correctly reconstructing neuronal arbors and identifying synapses (neural circuit reconstruction) from electron microscopy image volumes has escaped all attempts. All current attempts fail, and even if one achieved 99.999% correctness in pixel classification, the reconstructed circuitry would still be wrong because of ambiguities in the data that can only be resolved at an abstract object level, where prior knowledge of the anatomy can serve as a basis for decisions.
Perhaps a satisfactory approach will be developed in the future. At the moment, massive human proofreading is required, which prevents the reconstruction of even small neural circuits contained within a cube 100 microns on a side.
I would recall that a Turing machine is an algebraic construction, that is, a structured set or construct. I am talking about Turing machines, not about actual computers, much less neural networks or the like.
In addition, notice that I am not talking about undecidable problems, but about decidable problems that a human can solve and a Turing machine cannot.
Dr Juan-Esteban: "I am not talking about undecidable problems, but about decidable problems that a human can solve and a Turing machine cannot."
In fact, Dr Juan-Esteban, according to Turing's thesis, any process that can naturally be called an effective procedure is realized by a Turing machine, and "effective procedure" is strongly related to decidability. Would the question then be about a kind of antithesis to Turing's thesis?
I would like to avoid any reference to logic, undecidability and the like, which is a rather slippery subject. To illustrate my question, consider that a general algebraic equation of the 5th degree cannot be solved by means of radicals. This is not an undecidable question; the impossibility was proved by Abel (the Abel-Ruffini theorem).
I have pointed out in previous messages that any problem handled by a Turing machine must be split into bits. It follows that if a problem cannot be split into bits, then no Turing machine can solve it. For instance, a picture can be split into pixels, and therefore into bits. Are there picture properties that cannot be detected through pixels? If so, noticing such properties is obviously a task that a Turing machine cannot perform; for instance, detecting some kind of expression, like sadness, that a human can notice easily. If you do not agree with me, try writing a computer program to detect sadness in pictures or snapshots.
In natural languages there are also sentences whose meaning differs from what is induced by their parts. For instance, in ironic speech the listener can understand the opposite of what the sentence literally means, without the sentence being an idiom. Indeed, a human can notice irony. Consider the following sentence.
"Waste your money and you become richer."
Suppose that this sentence is said by a man while observing one of his friends wasting money on worthless things. Indeed, any human interprets the sentence as irony and assigns the opposite meaning to it. Now I repeat a similar question: can a Turing machine interpret sentences ironically?
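For what it is worth, programs do attempt this today. Below is a deliberately naive sketch of one common idea from computational irony detection, namely flagging a mismatch between the sentiment a sentence asserts and the sentiment of the situation in which it is uttered. The tiny lexicons and the clause-splitting rule are assumptions made up for the illustration, not a real system.

```python
# A toy sketch, not a serious system: flag irony when the asserted outcome
# is positive while the observed context is plainly negative. The lexicons
# below are illustrative assumptions, not real linguistic resources.

POSITIVE = {"richer", "wonderful", "great"}
NEGATIVE = {"waste", "wasting", "worthless", "ruin"}

def sentiment(words):
    """Crude count-based sentiment: +1 per positive word, -1 per negative."""
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def looks_ironic(sentence, context):
    """Judge only the asserted outcome (the clause after the last 'and')
    against the sentiment of the context the sentence is uttered in."""
    outcome = sentence.lower().split(" and ")[-1].split()
    setting = context.lower().split()
    return sentiment(outcome) > 0 and sentiment(setting) < 0

print(looks_ironic("Waste your money and you become richer",
                   "his friend is wasting money on worthless things"))
# -> True: literal praise uttered in a visibly negative situation
```

Whether such a heuristic counts as "interpreting" the irony is, of course, precisely the question at issue.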
Roughly speaking, I think that some emergent properties, which can only be noticed in pictures or sentences when these are regarded as a whole, cannot always be detected by an automaton. This is my question or, at least, a particular case of it.
What you say about splitting problems into bits strongly relates your question to knowledge representation. Your question could be reformulated as: are there problems, properties or pieces of knowledge that we cannot represent in a computer? In other words, are there problems that cannot be formalized?
Information retrieval seems to be a good candidate. The problem is: given an information need expressed as a natural language sentence, find the documents that are the most relevant. First, we don't know whether the sentence correctly expresses the information need of the user; we would probably have to know the user personally to fully understand his need. Then, we don't even know what "understanding a sentence" means. Finally, there is no formal definition of the relevance of a document with respect to a query.
Nevertheless, we perform information retrieval every day, with the help of web search engines of course.
As for "emergent properties that only can be noticed in pictures or sentences when they are regarded as a whole". This is probably not a sufficient criteria for Turing machine impossibility. With machine learning techniques and large sets of examples one can train neural networks or support vector machines to detect these properties. By the way, this is an active research area : http://www.sciencedirect.com/science/article/pii/S0893608005000407
Recall that the references to natural language, picture representations, computers or splitting information into bits are only illustrations of my question. In addition, the term formalization is too ambiguous, unless you can prove that it is not possible to create new formalizations different from the currently known ones.
Likewise, the link you have recommended deals with emotions. Emotions can be formalized through Kleisli categories. For instance, if M denotes a set of emotions structured as a monoid, it is a known fact that the endofunctor on the category of ordinary sets and maps defined by X -> X x M induces a monad whose corresponding Kleisli category is equivalent to a category of sets with fuzzy subsets. In this set extension, morphisms are map-pairs (f, g): X -> X x M sending each element x to a pair (f(x), g(x)), where g(x) denotes an emotion associated with f(x).
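The construction being described is, in categorical terms, the writer monad. A minimal sketch follows, assuming an illustrative monoid of emotion tags in which "neutral" is the unit and the most recent non-neutral emotion dominates; that monoid operation is a choice made for the example, not part of the cited construction.

```python
# Writer monad sketch: for a monoid M, X -> X x M induces a monad whose
# Kleisli arrows f : X -> Y x M compose by multiplying the M-components.

IDENTITY = "neutral"

def mappend(m1, m2):
    """Monoid operation on emotion tags: 'neutral' is the unit;
    otherwise the more recent emotion dominates (associative)."""
    return m1 if m2 == IDENTITY else m2

def unit(x):
    """Monad unit eta_X : X -> X x M."""
    return (x, IDENTITY)

def kleisli(f, g):
    """Kleisli composition of f : X -> Y x M and g : Y -> Z x M."""
    def composed(x):
        y, m1 = f(x)
        z, m2 = g(y)
        return (z, mappend(m1, m2))
    return composed

# Two Kleisli arrows pairing a value with an associated emotion:
see_face  = lambda name: (name + "'s face", "neutral")
read_face = lambda face: ("perceived " + face, "sadness")

print(kleisli(see_face, read_face)("Juan"))
# -> ("perceived Juan's face", 'sadness')
```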
Accordingly, formalizing emotions is a possible task, but it requires the action of a human brain. Indeed, every problem that a human can solve can be solved by a human together with a machine. Since in my initial question I spoke of problems that a human of average intelligence can solve and a Turing machine cannot, any formalization needed to solve the problem must be performed by the machine itself.
Nevertheless, revisiting the illustration I gave before, consider that a two-year-old child can notice sadness in the expression of his parents; even a dog can detect sadness in the expression of its owner. In both cases the agent knows nothing about formalizations. In addition, as I stated above, emotions can be formalized, but this is not the question. The question is whether they can be detected by a machine, whatever the underlying formalization may be. If I have mentioned formalizations at all, it has been in order to lead the question to its proper interpretation, avoiding misunderstanding due to excessive ambiguity.
In other words, the question consists of knowing whether or not every action that a human can perform can be carried out by handling symbols according to a substitution machinery. The answer is yes if no limit is imposed.
For instance, consider this example. Let E be the set of all possible human faces and expressions, each paired with a word denoting sadness, happiness, and so on.
Let S be a searching algorithm. If the input is a picture of a human face, the searching algorithm can look this face up in its storage system and then retrieve the word denoting the corresponding emotion. Notice that this machine is of no use, because of the infinity of the information set that S has to handle.
A human can do this task with a finite set of data. In fact, a machine consisting of a storage system together with an identity-based searching algorithm can answer any question that a human can solve, provided that its storage system contains all possible inputs and outputs; but in that case the human can generate the whole information set from a subset. The concept of a generating system belongs to the scope of categorical algebra, and this is why I have focused the question on its algebraic viewpoint.
We had an interesting lecture by Prof. Hava Siegelmann on "super-Turing" machines at the ICANN'12 conference. She presented some models that cannot be carried out by a Turing machine, such as evolving recurrent neural networks.
I don't know whether it might help you, but a starting point might be her book:
Thanks, Dr Mamalet: when I first asked whether the initial question was about a kind of antithesis to Turing's thesis, I had Siegelmann's work in mind. Meanwhile, I realized that Turing's thesis may not be falsified by super-computability, since super-computability is not algorithmic in the usual sense. Please see http://www.mbph.de/Logic/BeyondTuringLimit.pdf for such an opinion. (By the way, here is a link giving a brief idea of Siegelmann's work: http://binds.cs.umass.edu/papers/1995_Siegelmann_Science.pdf)
Regarding the optimal computation of the human mind: what exactly is meant by the optimal behaviour of the mind is still very unclear. I think the mind has not yet been used to its maximum, and no one can claim to know how vast its boundaries really are. Approaches like the UTM or hypercomputation are still unable to achieve the optimality of the mind. Phenomena such as dreaming, or conceiving of more than three dimensions (an 11th dimension, say), are aspects that are very complex to carry out using a TM.
Starting with the initial question, whether there is "some procedure that any human can perform easily, but that is impossible to be carried out by a Turing machine", this requires in my understanding a COMPARISON between HUMAN BEHAVIOR and TM BEHAVIOR.
(1) Perhaps it is important to note the fact that the TM exists for us only as a FORMAL (mathematical) CONCEPT, compared to humans as REAL ENTITIES.
(2) Within the formal apparatus the TM-concept has some 'syntactical' meaning without a direct relationship to anything real (but we can establish one, if we want, e.g. to a 'real' computer).
(3) Human persons as real entities are as such – from a scientific point of view -- completely 'unknown'.
(4) To compare something UNKNOWN with a purely FORMAL concept requires a 'translation' of the unknown real human persons into some FORMAL description of human persons as CONCEPT-HUMAN, which can be COMPARED to the formal description of a TM given as CONCEPT-TM.
(5) As far as I know we have mainly three sources of knowledge about human persons: (i) Individual SUBJECTIVE experiences; (ii) commonly OBSERVABLE behavior; (iii) commonly OBSERVABLE neural activities.
(6) With regard to the formalization of subjective experience (perhaps as some kind of PHENOMENOLOGICAL THEORY), to this day I do not know of any such theory that has gained acceptance (yes, there are many publications...).
(7) With regard to observable behavior there are numerous psychological (and others) attempts to do this, but even there I cannot see any really mature and widely accepted theory. Moreover, to give the observable behavior some MEANING it is necessary to relate it minimally to INTERNAL STATES of the system.
(8) Internal states are mainly the subject of physiology and, as a subset thereof, the neurosciences. Despite big successes in recent years, we are far from a complete theory of the inner states of human persons. In particular, we lack a formal account that could be classified as a formal theory in the sense of CONCEPT-BODY as part of CONCEPT-HUMAN. For a theory of the inner states it is also important to have at least a minimal relationship to observable behavior; otherwise the neural activities are 'without meaning'.
(9) For me a complete formal concept CONCEPT-HUMAN would include the partial formal concepts CONCEPT-BEHAVIOR, CONCEPT-INTERNAL STATES as well as CONCEPT-CONSCIOUSNESS, with necessary CORRELATIONS between all three.
(10) To my knowledge we are very, very far from this. Thus I see no chance of seriously comparing a (not yet existing) formal concept CONCEPT-HUMAN with an (existing) formal concept CONCEPT-TM.
(11) The cited papers of Siegelmann and Bremer are, in my understanding, very clear and stimulating. But perhaps there is an additional view of the problem of computability.
(12) Nobody can really tell us, in a formalized manner, what a MIND is. But there are strong hints that the mind we can experience subjectively is associated with the BRAIN in a BODY. If we assume (perhaps a simplification) that the MIND is a function of the BRAIN-in-BODY, then we can replace the comparison between MIND and TM by a comparison involving a formal concept of the brain, CONCEPT-BRAIN. Nobody has a complete formal concept CONCEPT-BRAIN so far. But we have many different kinds of formal models of ARTIFICIAL NEURAL NETWORKS (ANNs), which model at least some aspects of the brain. All the ANNs in use can be simulated on an ordinary computer, and any ordinary computer is an INSTANCE of the formal concept CONCEPT-TM. This is no proof, but a hint of either the power of a TM or the limits of the finite brain.
(13) As Bremer mentions in his paper, there are numerous mathematical objects which cannot be computed by a TM, yet we humans with our brains can operate with such mathematical objects. Is this a contradiction? It need not be, because to describe some formal object in a formal manner as 'infinite' does not imply that the object is 'really' infinite (for example, the definition of the real numbers uses finite means to define something 'infinite'). I would suggest, on the contrary, that if the finite brain can 'think' of 'formally infinite' objects, and a TM can simulate all formal models of the brain known so far, then the TM can, formally, compute such formally infinite objects too.
To define functions that cannot be computed either by a human or by a machine is a very easy task for any human who knows something about algebra and logic.
This is not the question I had in mind when I initiated this thread. Nevertheless, I am now aware that the title is very ambiguous. This circumstance has given rise to some answers that do not fit the question I have in mind. Several times I have given illustrations of my aim, together with examples such as the fact that any human can notice sadness in a picture while, as far as I know, a machine cannot.
I apologize for the ambiguity of the title and I will try to open a new topic with a more accurate title.
In any case, I appreciate the messages of all the researchers.
I assume the question asks for a reasoning procedure that a person could perform easily, but that a Turing machine could not do. A human can turn starch into sugar, juggle, pull out clumps of its own hair, and sense its internal and external environment. I don't think these are the types of procedure being asked about.
A person knowledgeable in the subject could easily determine that a TM cannot be guaranteed to solve an arbitrary problem in finite time. Could a TM determine that?
Understanding that what a Turing machine can carry out is essentially what a computer (better, an expandable computer network) together with a C program can carry out, an approximate answer to the original question is: writing a C program that defines a useful programming language such as Ruby or Java. As is well known, the design of such a language involves many decisions that depend on personal preferences, and it nevertheless has to satisfy many very hard requirements. Of course, the proposed task cannot really be solved by 'any human easily', but it was solved by at least some humans.
You posed the restriction "Obviously, I am asking for some procedure that any human can perform easily." Otherwise, I would have answered "quantum algorithms". As far as I understand, these are not executable on a Turing machine, since it was not designed to handle quantum bits.
@Herbert: Any quantum-mechanical process (in particular one that can happen in a quantum computer) can be simulated by methods of classical numerical mathematics (essentially, solution of the time-dependent Schroedinger equation for the deterministic part, and calls to numerical pseudo-random generators for the so-called R-process). This results in an ordinary simulation program that can run on a normal classical computer, and thus in principle on a Turing machine. The only spectacular promise of quantum computers is efficiency.
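A one-qubit toy version of this point, for concreteness: the deterministic part is plain linear algebra on the state vector, and the measurement is sampled with an ordinary pseudo-random generator. This is only a sketch, not a general-purpose simulator.

```python
# Classical simulation of one qubit: Hadamard gate, then measurement.
import numpy as np

rng = np.random.default_rng(42)               # pseudo-random "R-process"

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)          # Hadamard gate (unitary step)

state = np.array([1.0, 0.0])                  # |0>
state = H @ state                             # deterministic Schroedinger part

probs = np.abs(state) ** 2                    # Born rule: |amplitude|^2
outcome = rng.choice([0, 1], p=probs)         # classical sampling stands in
print("measured:", outcome)                   # for the quantum measurement
```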
If I wanted to find some procedure that a human brain can carry out and Turing machines cannot, I would pay attention to those actions that machines do not perform. Turing machines handle symbols blindly, ignoring their meanings. This is the key.
To illustrate this topic, let us consider the three symbol sequences below.
1) The film I saw yesterday is very funny.
2) The closure of a set A is the complement of the exterior of A.
3) No anteprotocustic baetorilenedo farcitteritumoulus.
If someone with no college education reads these symbol sequences, he can likely remember the first one for hours. The second one could hardly be remembered a few minutes later. The third one requires a great effort to be remembered. Why?
Because the first symbol sequence is easy to understand, the second can be understood only by a mathematician, and the third is not understandable at all. Only understandable objects are easy to remember. This fact suggests that the human brain stores meanings in its memory, while storing bare symbols requires some complementary effort.
How can the human brain transform symbol sequences into meanings? This is a very complex procedure that Turing machines cannot perform, because they handle symbols blindly. But the machinery that the human brain possesses for transforming symbol sequences into meanings, and the nature of mental meaning representations, are far from being understood. In addition, there are interactions among the things stored in human memory. Our mind establishes associations among them, based upon meaning analogies. Without handling meanings and meaning analogies, such associations cannot take place.
@Juan-Esteban: It was a pleasure to read your essay. As I see it, you are right and wrong in an interesting way, but more wrong!
You say 'Turing machines handle symbols blindly'; I would say 'Turing machines handle symbols in a pre-determined way'. (I take 'Turing machine' only as a metaphor for a state-of-the-art computer together with a program and input data, e.g. a list of readable text files. Actually, it is funny to observe the coerciveness with which the word 'machine' in this context obscures, for most people, the fact that the essential part of a Turing machine is the program, which is hidden as a nameless state transition function.) The program could well inspect dictionaries and knowledge bases (all input data of our program) to obtain formal representations of the meaning of text components and of their entirety. This would parallel the experiences stored in the human brain, with which we compare our computer program. The program could also output reformulations or variations of what it got to read. Depending on its internal state (or interactive input), it could write in a dry style or a humorous one.
I don't know what the status of professional programs for knowledge representation and inference is, but I remember that years ago, under the name of 'expert systems', these were considered the most advanced and useful outcomes of research on 'artificial intelligence'. Therefore I assume that one could already find impressive examples of programs that work along the lines drawn above.
From time to time I think about doing some experiments myself; a folder already exists named 'TAO', meaning 'thought as objects' (referring, of course, to object-oriented programming).
To conclude: a perfect simulation of human understanding should be possible if the simulating program contains a coded version of human experience.
Notice that for "meaning" you understand a "formal representation of the meaning…."
Is again a formal representation a symbol sequence? A dictionary, does not assign any meaning to any word. A dictionary changes a definition by other more explicit. For instance, consider the word square. If a dictionary assigns the meaning "geometric figure with four equal sides " this is no a meaning. To understand this symbol sequence you must know the meaning of "figure", "equals", "four", "side" etc. Assigning a meaning to the word "square" consists of assigning this figure ⧠.
To teach the meaning of the word square to a man who knows no word in English, the dictionary definition is useless; however the analogic figure ⧠ works fine.
If you observe the snapshot accompanying my messages, the meaning that you can assign to my name is this snapshot together with a lot of additional information, like human being, male, old, etc.
When I use the term "meaning", I am talking about analogical representations or something similar. In fact, I have defined this topic in my paper, in which meanings are built through a Kleisli category equivalent to a certain category of sets with fuzzy subsets.
Of course a computer can assign the figure ⧠ to the word "square", but this is a trivial example.
In any case, a Turing machine is a well-defined algebraic construction, and when I say "Turing machine" you must understand neither a machine that will be invented in the future nor a machine described in science-fiction literature.
Article CONCEPTUAL SYSTEMS, CONCEPTUAL CONVERGENCE THEORY AND ALGEBR...
Much of this discussion brings to mind Searle's original 1980 'Chinese Room' paper and the many responses in BBS. Searle's point was very similar, although described differently: the 'Chinese Room' was effectively a way of thinking about a Turing machine, with a person inside following simple but precisely algorithmic rules. Searle questioned whether this could 'do' meaning.
There have been many, many responses. Sticking to the mathematical aspects, Roger Penrose's was closest: he effectively argued that mathematicians have a kind of insight that cannot be computed on a Turing machine. However, he proposed that quantum effects in microtubules provide the extra power. That seemed weak to me, even at the time, simply because, as you said earlier, quantum computing as described buys speed but not much else. So he could not prove the argument, merely claim it.
Perhaps the most persuasive response for your case would be to point to the true deficiency of a Turing machine: its use of symbols that are supposed to relate to real-world concepts, where that relation is not possible because the only way symbols get into a Turing machine is when they are put there, even indirectly, by a programmer. Stevan Harnad's 'robot' response took this line. There are other, similar deficiencies, but in all the cases I know of, without unfairly restricting what counts as a Turing machine, you gain no power for the argument.
Personally, I'd say that it is probably impossible to 'prove' that a human can do something that a Turing machine cannot. Humans are not essentially computational, although some of their behaviour can be described computationally, though not necessarily accurately. So any discrepancy could be due to the computational description, not to the true human behaviour. A Turing machine is an abstract computational entity; a human is a real biological/psychological/social one. Conflating the two is very problematic.
Sorry for the long response, but one small point might be interesting. Turing machines were modelled on what human 'computers' did in the first half of the 20th century. So, in a sense, a Turing machine emulates a real human behaviour, although a specific and restricted one. Again, attempting to prove that a Turing machine cannot do something that a human can really doesn't tell us very much, since it is a specific and restricted type of behaviour we are allowing it.
First, take into account that my question is not restricted to computation.
It is not only a Turing machine that can solve any problem; a simple two-column hash table together with a searching algorithm can solve any problem too. To see this, consider a table T whose first column contains all possible questions in some subject and whose second column, in the same row, contains the corresponding answer or solution. If you ask such a system any question, the searching algorithm finds it in the first column and retrieves the corresponding answer from the second.
The inconvenience is that such a system is only adequate for small tables; otherwise the searching procedure could be too slow. Usually a Turing machine is a procedure for generating T by means of a small table of associations.
In any case, despite solving any problem, do you consider such a table T to be an intelligent system?
The only intelligent action is what programmers do when they synthesize the table T in order to create a generating system for it by means of a Turing machine. Without understanding the meanings assigned to the symbols in its memory or tape, neither a machine nor a human can create the way of generating T. This programming task requires handling analogical representations, knowing the logic and properties of the objects involved and, above all, inventing and dreaming. Summarizing: with a table and a searching algorithm you can solve any problem, but the intelligence is in the table's creator, not in the table itself.
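The contrast can be made concrete with a deliberately trivial stand-in problem (squaring numbers is an assumption chosen only for illustration): the table T is finite storage searched by identity, while the generating system covers unboundedly many inputs with a finite rule.

```python
# An explicit table T of question/answer pairs versus the small
# "generating system" that a programmer distils from it.

# The table itself: pure storage plus identity-based search (a dict lookup).
T = {n: n * n for n in range(1000)}           # necessarily finite

def answer_by_table(question):
    return T[question]                        # fails outside the stored range

# The generating system: covers infinitely many inputs with a finite rule.
def answer_by_rule(question):
    return question * question

print(answer_by_table(31), answer_by_rule(31))   # 961 961
print(answer_by_rule(10**6))                     # the finite table cannot do this
```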
That's a very useful clarification. Obviously a table T isn't an intelligent system, because we know, naively, how it works. That was the point of the Turing test: to hide the implementation so that observers can conclude only from the behaviour. Searle's argument broke the rule by revealing the ghost in the machine. I'd also say that table T is much, much bigger than it first appears, as each question would also have to include the whole history of the previous questions, so that conversational responses are coherent, not just simple answers.
But I'd say that any attempt to program a system to mimic intelligent behaviour is not in itself intelligent. The system itself has to have its own history, grounded in the same world as a human's. Ruth Garrett Millikan's work in biosemantics took this line. She argued, for example, that an evolved being and a constructed being, even when physically identical, would differ in the meanings they bear. That is a very appealing argument. To address it, a Turing machine would really have to be embodied for a substantial time, and to adapt significantly over that time, to be comparable to a human. After such a period, two identically programmed Turing machines would differ conceptually in their semantics because of the different histories behind them. Only after an extended period like that would the system cease to be 'programmed', if you like, and the intelligence would be part of the system rather than due to an original programmer.
However, this just refines the main point I was making: human intelligent behaviour is fundamentally not computational. We recognize intelligence when we see it; we do not define it.
@Juan-Esteban: Your hash table example is based on a misconception of what a Turing machine is defined to be: it is 'potentially infinite', since its tape is elongated whenever the read-write head comes to the end of the tape.
By contrast, your table T is necessarily finite, and there is no natural rule for extending it if you come to its end.
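To make the 'potentially infinite tape' concrete, here is a toy simulator (my own sketch, with an illustrative machine): the tape is stored sparsely, and reading a cell that was never visited simply yields a blank, so the tape elongates on demand with no fixed bound.

```python
# Minimal Turing-machine simulator with a tape that grows on demand.
from collections import defaultdict

def run_tm(delta, tape_input, state="q0", blank="_", max_steps=10_000):
    tape = defaultdict(lambda: blank,
                       {i: s for i, s in enumerate(tape_input)})
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head]              # an unseen cell reads as blank:
        state, tape[head], move = delta[(state, symbol)]   # tape "elongates"
        head += 1 if move == "R" else -1
    cells = sorted(tape)
    return "".join(tape[i] for i in range(cells[0], cells[-1] + 1))

# Toy machine: (state, read) -> (new state, write, head move).
# It moves right past its input and appends a '1'.
append_one = {
    ("q0", "1"): ("q0", "1", "R"),       # skip over the input
    ("q0", "_"): ("halt", "1", "R"),     # first blank: write 1 and halt
}

print(run_tm(append_one, "111"))         # -> 1111
```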
An even deeper misconception is that a Turing machine 'can solve any problem'.
I read a part of your paper and enjoyed your characterization of intelligence as something capable of generating (inventing) algorithms. Indeed, this idea allows me to reformulate my initial answer to your initial question: creating a C program which creates meaningful algorithms (ones that were not programmed in). Of course, the problem lies with 'meaningful'. Taking some formal definition of 'algorithm', it is not difficult to generate a list of algorithms by means of a C program. To have any in the list which a mathematician would consider useful would be a piece of luck.
When your text becomes more technical it becomes indigestible for me: knowing 'free monoids' and 'partial monoids' doesn't help me make sense of 'partial free-monoids'. Defining languages in a way that says nothing about grammar probably cannot capture their essential functions. And where 'thin categories' come in, a less academic formalism would presumably also do the job...
A Turing machine cannot guide a robot across a terrain with obstacles. This is for the same reason that, while a TM can solve a maze given a representation of the maze, it cannot solve a maze given only an interior view of it, such as a rat or a human would see. The systems that drive cars are not equivalent to TMs, but rather to TMs augmented with tapes that retain their contents after each I/O pair, i.e., after each TM computation. TMs without that extension start each computation with a tape containing only the most recent input.
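A toy contrast between the two machine types, under made-up observations and actions (this is not a real robotics interface): the stateless transducer sees only its latest input, while the persistent-tape machine accumulates a history across I/O pairs and can therefore respond differently to identical observations.

```python
# Stateless transducer vs. a machine whose "tape" survives between I/O pairs.

def stateless_step(observation):
    """Sees only the most recent input; cannot remember past dead ends."""
    return "turn-left" if observation == "wall" else "go-forward"

class PersistentMachine:
    """Tape contents are retained across I/O pairs, so past observations
    can accumulate into a record of the (initially unknown) maze."""
    def __init__(self):
        self.tape = []                     # retained between computations

    def step(self, observation):
        self.tape.append(observation)      # remember everything seen so far
        if observation != "wall":
            return "go-forward"
        # behaviour now depends on history, not just the current input:
        return "turn-left" if self.tape.count("wall") % 2 else "turn-right"

m = PersistentMachine()
for obs in ["open", "wall", "open", "wall"]:
    print(stateless_step(obs), m.step(obs))
# On the second "wall" the two machines diverge: same input, different output.
```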
"Given a description of an arbitrary computer program, decide whether the program finishes running or continues to run forever". This is equivalent to the problem of deciding, given a program and an input, whether the program will eventually halt when run with that input, or will run forever.
Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. A key part of the proof was a mathematical definition of a computer and program, what became known as a Turing machine; the halting problem is undecidable over Turing machines. It is one of the first examples of a decision problem.
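The diagonal argument behind this theorem can be written down as (hypothetical) code. If a total, always-correct halts() existed, the program below could be fed to itself and would halt exactly when halts() says it does not; hence no such halts() can be implemented.

```python
# The classical diagonalization, rendered as a sketch in Python.

def halts(program, data):
    """Hypothetical oracle: True iff program(data) eventually halts.
    Turing's theorem says this function cannot be implemented."""
    raise NotImplementedError

def diagonal(program):
    if halts(program, program):   # if the oracle predicts halting...
        while True:               # ...loop forever instead,
            pass
    return "halted"               # otherwise halt immediately.

# diagonal(diagonal) would halt if and only if it does not halt:
# a contradiction, so halts() is not computable.
```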
Beyond the Turing Machine: The Quantum - Classical Poised Realm of Open Quantum Systems, Non-Algorithmic Trans-Turing Systems, Implications for Mind and Brain
Strong Artificial Intelligence is the view that the mind must be a complex Turing machine, or a network of formal logical gates. The Turing machine defines "algorithmic".
Hidden in the above framework are two assumptions: 1) knowledge is propositional and representational, as bits in a Turing machine or a McCulloch-Pitts network; 2) all knowledge is so representable. However, the famous FRAME problem, which rests on these assumptions, has plagued computer science since Turing and has never been solved.
Critically, humans and dogs solve the frame problem, often easily. Dogs presumably do so without propositional knowledge. These facts suggest that the entire framework above is wonderful, but very limited despite its evident power.
=====
… have defined a non-algorithmic Trans-Turing system comprised of quantum superposition states, pure states, and classical degrees of freedom in the Poised Realm. This system is neither deterministic, due to the quantum superposition states and quantum measurements in the Trans-Turing system, nor random, due to the classical degrees of freedom. Thus it is an entirely new, non-algorithmic class of "information processing systems", for which we will need to redefine "information" in the Poised Realm.
======
I (Kauffman) believe Trans-Turing systems are constructible, go beyond Turing, and may have powerful implications for the mind-brain system. Decoherence allows "mind" to act ACAUSALLY on brain, answering Descartes' problematic dualism of Res Cogitans and Res Extensa, the latter bound by classical physics and its causal closure. It is possible, and testable by diverse approaches, that Trans-Turing systems are located in synapses and postsynaptic neurotransmitter receptors, and that quantum entanglement plays a role in the binding problem of neuroscience. Just conceivably, conscious experience is associated with quantum measurement, again a testable hypothesis.
@Ulrich: If quantum computing can be implemented by a classical Turing machine, why then do special quantum Turing machines exist? Some of these include irreversible transition functions and allow the representation of quantum measurements without classical outcomes, according to
It could perhaps be that non-classical (!), probabilistic Turing machines might bridge the gap.
Also, Louis's answer seems to point in a direction similar to my original answer, which related to the classical Turing machine.
@Juan-Esteban: Perhaps the kind of Turing machine that you are asking about should be clarified. Even Wikipedia knows of quite a number of different kinds of Turing machine.
@Herbert: Quantum Turing machines were invented (on paper, not built) to make use of quantum dynamics for implementing program steps. Since actual machines are starting to use quantum dynamics, it is a natural desire to have a direct mental model for such machines (as opposed to one that goes back to classical physics). However, I have listened to a few talks by people who thought along the lines of your question and were not aware of the potential of classical simulation.
Of course, a simulation of a quantum system on a classical computer needs a random generator for extracting measurement results from wave functions (my practical experience is only with quantum measurements WITH classical outcomes, whatever the alternative you mention may be). The deterministic, algorithmic 'pseudo-random' generators in use in all programming systems (C, C++, Java, Ruby, Python, Mathematica, ...) give no reason to separate them from 'true random' generators (which is an undefined and useless concept).
To use your words: classical Turing machines simulate probabilistic Turing machines (for all practical purposes perfectly), thus rendering the latter superfluous.
@William - what makes natural language "instinctive" to a human? It is learned, just as concepts are. There are algorithms, capable of being implemented on a Turing machine, that form concepts, and that do so in a way that is psychologically and neurally plausible. Obviously, this is sufficiently complicated that a Turing machine is not a useful way to actually do it, but in principle it is possible. Even Searle's Chinese Room argument accepts that a Turing machine is capable, in theory, of behaving in a way that is indistinguishable from discourse (i.e., of passing the Turing test). His argument was that the semantics were wrong. If that is your argument, there are sharper ways of making it, with reference to Stevan Harnad's response, for example (that human concepts depend on physical embodiment). As to the issue of abductive reasoning, there are algorithms that implement it, e.g., ACL, and, as algorithms, these are capable of Turing machine implementation. Not with a blank tape, obviously, but comparing a tabula rasa TM with a human might not be a strong argument.
@William - I think you're being too anthropocentric. A huge part of Turing's work was to question that very assumption. Chickens 'instinctively' learn to communicate with other chickens much better than people do. What people truly excel at is meta-representation and meta-level reasoning, as you correctly say. But this is built not on a blank tape but on a billion years of history, mostly but not entirely biological. I'm not saying these don't matter; they do (as I've hinted before, Millikan's work on biosemantics is sound on this). But you can't use a blank tape as an equivalent: that's like comparing an empty space with a human. Whatever life started as, it is now a highly complex mix of biological and social stuff, even in animals like chickens, even in algae for that matter. The body:mind::hardware:software metaphor is so badly wrong that I don't know where to begin. You have much better arguments at your disposal.
It is a very challenging question. Although first thoughts may lead to the halting problem, that does not answer your specific question. If I understand correctly, you specifically want to know a mathematical problem which _computers_ aren't able to solve but _humans_ can.
If there were already an exact answer, it would finally settle many issues and discussions about human nature vs. computer nature. Currently there is no such _provable_ construct.
As for the halting problem: there is no proof that humans are able to determine the termination of _any_ program (just of some, together with the endlessly many programs in the same equivalence class).
(Background: although this question is loaded with beliefs about determinism, I must admit that my own belief is that humans are probably more than machines.)
In my experience there is one particular thing which humans _probably_ can do and computer programs can't at all: _finding/determining loop invariants_ in formal semantics to prove the correctness (_not_ the termination) of a function. An additional proof of termination is then required to show that a program works according to its specification (model checking).
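A small illustration of what "finding a loop invariant" means, on a toy summation loop: the invariant asserted below was supplied by a human, which is exactly the point, while checking it once found is mechanical.

```python
# Partial correctness of a summation loop via a human-supplied invariant.

def sum_up_to(n):
    """Returns 0 + 1 + ... + n."""
    total, i = 0, 0
    while i < n:
        assert total == i * (i + 1) // 2   # the loop invariant, found by a human
        i += 1
        total += i
    assert total == n * (n + 1) // 2       # invariant + exit condition give
    return total                           # the correctness of the result

print(sum_up_to(10))  # 55
```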
One weakness in that argument is, of course, that as soon as a human gives a particular loop invariant, a computer system can include it in its knowledge base (and use this loop invariant to check every program in the same equivalence class).
But the main question remains: can a human find a loop invariant for _any_ given program?
To conclude and answer your question directly: from my experience and knowledge, there is at the moment _no known problem_ that humans can solve _easily_ and computers cannot solve _at all_.