Thanks Mr. Wang for raising this kind of discussion. In my opinion, the study of AI is essentially an effort to find the answer to this question. Though there are many areas where the brain is still far more advanced than current computers, the two areas I personally focus on are vision and decision making under uncertain conditions. Though we have traversed a long distance, from probability theory to type-n fuzzy set theory, in dealing with uncertainty, we are still far from the accuracy that domain experts actually achieve when making decisions under uncertain conditions. Wish to hear from you and other experts about the same...
I think the numerous differences between brains and computers would require several levels of abstraction to describe and fully appreciate. As this is beyond the scope of this answer, I just want to point out some aspects that range across those levels, to give an idea of what I see as the most prominent differences.
- Whereas integrated circuits (ICs) rely on only a few types of components (mainly transistors and their traces), the brain uses a wide variety of highly interwoven mechanisms. Most prominent are the numerous types of neurons, but there are many other mechanisms such as neuromodulators, ephaptic interactions, or gap junctions.
- Whereas the mechanisms of ICs are designed to be highly reliable (and are required to be), the mechanisms of the brain are rather unreliable and noisy. In addition, the brain exhibits "graceful degradation": it can still operate sufficiently well even when parts of its mechanisms are damaged.
- Mechanisms of ICs are typically designed to serve just one purpose, and their design is commonly modular and free of side-effects. In contrast, the circuits in the brain were shaped by an evolutionary process that exploits side-effects where it can and often uses one circuit for many purposes at once. An amazing example of such a multi-purpose, side-effect-laden circuit is the stomatogastric ganglion in crustaceans.
- The flexibility of ICs is limited and typically implemented on the software side; e.g., during sensor processing the sensor data "flows through" the IC without modifying it. All state modifications in ICs are typically confined to areas designated as "memory" (registers, cache, RAM, etc.); the circuitry itself is rather static. In brains this is fundamentally different: every flow of information also modifies the circuitry, in every place it passes. This has effects on multiple levels. It not only shapes the connections between neurons, but typically also changes the way information can flow within a neuron. (A minimal code sketch of this contrast follows the list.)
- We design our software to produce an output that is interpreted by us humans. For example, sensor processing is often designed as a pipeline, with raw sensor data flowing in at one end and high-level, aggregated information flowing out at the other. This is fundamentally different in the brain. There is no Cartesian theatre in the brain where processed, high-level information pops out of a pipeline and is interpreted at some kind of central core. It is rather a highly non-linear, distributed network that serves a vast number of simultaneous purposes derived from the long evolutionary history of the particular organism. Admittedly, the latter description is rather vague, but that's the best I can come up with right now ;-)
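To make the "flexibility" point above a bit more concrete, here is a minimal sketch (Python, with made-up numbers) contrasting a fixed filter, whose parameters never change while data flows through it, with a Hebbian-style unit whose weights are altered by every input it processes. This is a cartoon of the idea, not a model of any real neuron or of a particular learning rule used in the brain.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Static IC" style filter: parameters fixed at design time.
# Data flows through; the circuit itself never changes.
fixed_weights = np.array([0.5, -0.2, 0.8])

def fixed_filter(x):
    return fixed_weights @ x  # weights untouched by the data

# "Plastic" unit: every input also rewires the unit a little
# (toy Hebbian rule: weights grow when input and output are co-active).
plastic_weights = np.array([0.5, -0.2, 0.8])
learning_rate = 0.05

def plastic_unit(x):
    global plastic_weights
    y = plastic_weights @ x
    plastic_weights = plastic_weights + learning_rate * y * x  # processing changes the circuit
    return y

for _ in range(5):
    x = rng.normal(size=3)
    fixed_filter(x)
    plastic_unit(x)

print("fixed weights after 5 inputs:  ", fixed_weights)
print("plastic weights after 5 inputs:", plastic_weights)
```

After a handful of inputs the "plastic" weights have drifted while the "fixed" ones are unchanged; in the brain, unlike in the IC, there is no separation between the data path and the memory.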
I would highly recommend Christof Koch's "Biophysics of Computation" in this context.
I believe that the main difference between brain and computer is the globally parallel processing of information in the brain. The brain and mind are parallel in their basic functional organization (regardless of which hypothetical structures perform those functions), and sequential processing requires special self-control. This is the source of both strengths and weaknesses of the brain compared with the computer (depending on the type of task). Often the parallel processing gets in the way of necessary sequential processing (the brain tends to be distracted :-)
Furthermore, global parallelism means that the mind is at every moment working with all possible reflections of the situation, including mutually exclusive ones. Directly or indirectly, through associative links, the whole experience and knowledge of a person are involved in reflecting every situation at every moment. To simulate this life of the mind, a computer would have to work in parallel and simultaneously with all the information available at any given time. As far as I know, no computer can do that now.
In other words, if we are aware of only one image, for example the apparent presence of object X, it means that we are simply not aware of an infinite number of other actual images involved in the reflection of the situation in our mind (the absence of object X, the presence of object Z, etc.). Those images also exist in our mind at that moment, but their level of energy is below the threshold of awareness; they are very weak, and this weakness reflects the low probability of the corresponding objects (more precisely, the energetic weakness of an image corresponds to the subjective probability of the object, which may more or less correctly reflect its objective probability). If, however, the energy of other images rises, we become aware of them as composing the uncertain conditions that may lead us to uncertain decisions.
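A toy illustration of that idea (my own sketch, not a model from the literature): keep a whole population of candidate "images" active in parallel, let each activation level stand in for subjective probability, and report as "aware" only the ones above a threshold. The image names, numbers, and threshold below are invented for illustration.

```python
import numpy as np

# Hypothetical candidate "images" reflecting the current situation.
images = ["X present", "X absent", "Z present", "Z absent"]

# Activation levels standing in for subjective probability (made-up numbers);
# in this cartoon all of them exist simultaneously and are updated in parallel.
activation = np.array([0.85, 0.15, 0.05, 0.40])

AWARENESS_THRESHOLD = 0.5

aware = [img for img, a in zip(images, activation) if a > AWARENESS_THRESHOLD]
subthreshold = [img for img, a in zip(images, activation) if a <= AWARENESS_THRESHOLD]

print("aware of:", aware)
print("also present, below threshold:", subthreshold)
```

In this sketch only "X present" crosses the threshold, but the other images never disappear; if their activations grew, several would become conscious at once, which is what I mean by awareness of uncertain conditions.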
That global parallelism is impossible for a computer at present, and this makes it weaker than a person under uncertain conditions. But the same limitation on the information being processed makes the computer more effective at fully defined but very complex problems that require taking many certain factors into account.