Computational Intelligence is a hot field of research in AI; however, more needs to be done on Artificial Consciousness. An interdisciplinary approach could make this possible.
I agree with you. Artificial consciousness is an interdisciplinary topic, and some philosophical arguments limit the research. I wonder whether there are research groups that cross Philosophy, Psychology, and Computer Science.
Biological consciousness, and even more so intelligence of the human grade, is asking for the stars: biology achieved it over a span of roughly two billion years, while computation and electronics are very late developments. Still, some degree of it could be achieved, and that cannot happen in isolated labs; interdisciplinary work is the only way forward. It is a very interesting prospect for the future. Interestingly, I was at various times a student of maths, physics, and biology, and am today a faculty member in a medical discipline, namely neuroscience, so I would be glad to lend a hand. Do come back sometime and let me hear your proposition.
I must admit I am also a bit disappointed that not much research is being done in this direction. However, it is indeed a very difficult problem, and we are still too ignorant to actually build such computational systems.
We probably even lack the overall computational power needed to simulate a brain in its entirety, assuming that this would be necessary, which is not a given. After all, artificial intelligence and artificial consciousness need not be similar to their organic equivalents; there might be some big differences involved.
There is also a philosophical question of whether we can actually succeed in building an intelligence that exceeds our own. It might just as well be that such artificial beings would need to evolve in silico instead of being created directly by researchers. Maybe we just need to 'set the stage' and let evolution do its thing, as in the sketch below. It is a slow process, though.
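For what it's worth, 'setting the stage' has a minimal computational reading: an evolutionary loop run in silico. The Python sketch below is a toy illustration under that assumption; in particular, the hand-written fitness function is a placeholder for whatever selective pressure a real 'stage' would impose, and every name in it is hypothetical.

import random

# Toy 'stage': a population of bit-string genomes under selection
# and mutation. The fitness function is a placeholder assumption;
# in the scenario above it would be the environment we set up,
# not an objective we hand-design.

def fitness(genome):
    return sum(genome)  # toy objective: count of 1s in the genome

def evolve(pop_size=20, genome_len=16, generations=100, mutation_rate=0.05):
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]       # selection
        children = [[1 - g if random.random() < mutation_rate else g
                     for g in parent]                # mutation
                    for parent in survivors]
        population = survivors + children            # next generation
    return max(population, key=fitness)

best = evolve()
print(fitness(best), best)

Even in this toy form, the designer specifies only the stage (selection and mutation), never the solution itself; the catch, as noted, is that the search is slow.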
In practice, though, there is no guarantee that any such approach would succeed, and this is one of the main reasons why people no longer pursue it much. Also, it is not a one-person effort, as many faculties of mind would need to be properly designed.
What we are all working on right now is analyzing special cases of information processing, learning, and decision making. We are trying to build efficient and effective specialized tools, as these are directly applicable to solving real-world problems.
The bottom line is that focusing on special cases brings you funding and a paycheck at the end of the month. Dreaming of something much more ambitious... doesn't. It is the sad reality of the society we live in, but there is some sense and meaning to it: the intelligent systems being built help improve the quality of life of many people around the world, and in some cases even help save lives. As this is something we can actually do at this moment, working on such problems is highly pragmatic.
However, as you've said, we should never lose the higher perspective, and we should still strive toward the more ambitious goals.
Consciousness cannot be modelled by logic alone. Artificial Intelligence is at an embryonic stage, as consciousness is not fully understood by neuroscientists and psychologists. It will therefore be difficult to simulate.
A lot of funding is directed toward the development of AI applications for Control, Pattern Recognition, and Optimisation, because funding bodies are interested in applications that give results and enhance current technologies. Much remains to be done in this interdisciplinary area. Departments should come together and build research hubs focused on AI and Cybernetics, so that Computer Scientists, Neuroscientists, Philosophers, and Psychologists can discuss the actual problems. Currently there is a large research gap: researchers work within their own groups, and there is a lot of inbreeding of ideas.
We may come to better understand the nature of human consciousness by taking steps to model it computationally, and in the process we can build applications that benefit humanity. We need stronger scientific and theoretical foundations concerning the nature of consciousness before we can replicate it computationally.
Nenad, you got one of the points. Because of the limitations of philosophical theory, we cannot easily get funding to complete the computational model. When we talk about research on machine consciousness, most researchers treat it as a joke. Actually, we are not as far from a computational model as is commonly believed.
The philosophy of mind is different from psychology, so relying on either alone limits machine learning to only one view of the mind. What you can do is create a data set drawing on both. The larger the data set, the more the machine can learn. But this will involve computation: the machine will always compute first in order to understand the terms.
Consciousness cannot be expressed with data and learning alone. We make certain decisions even when no data is present. The intuitive side of consciousness remains a mystery.
We also have to note that we are born with certain talents: for some activities no training is required, and for others training does not even help. In the case of creative abilities in Art and Poetry, some people are born with these talents, while others try to train themselves and sometimes training does not help. The bigger questions concern the nature of our reality and the mind-body problem.
They look at the world through their eyes; artistic talents are abilities to process the data they get from sight. We call them talents, but that does not mean the abilities exist at birth. There may instead be special conditions present at birth that help them develop those abilities.
We can start progress toward Machine Consciousness at the same place that progress toward Biological Consciousness started.
Let's consider the first mental processes that occurred in biological systems and work on generating the same processes in the Machine world. I want to be clear that I do not mean to simulate these processes but to have them actually happen.
Since this forum does not truly support 'conversation', I will continue as if this were an answer. Don't mistake this for any level of certainty on my part. It is just a hypothesis.
Suppose that "awareness of the environment" occurred at the cellular level. We can certainly build a circuit that is sensitive to its environment. But the circuit would need to provide some action in response that shows its awareness.
We could build a circuit with a power supply that gets turned off if the circuit fails to adjust its output level at a chosen frequency. By varying the chosen frequency, we should be able to determine whether the circuit has a desire to remain powered on.
This would be a fundamental mental feature that could be demonstrated.
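To make the proposed experiment concrete, here is a minimal simulation sketch in Python. The class name, the adaptation probability, and the loop are all hypothetical stand-ins for real hardware; the point is only the shape of the test: cut power when the circuit fails to adjust its output on schedule, then vary the demanded period.

import random

class PoweredCircuit:
    """Toy model of the test above: the circuit keeps its power only
    if it adjusts its output at the frequency the tester demands."""

    def __init__(self, adapt_probability=0.9):
        self.powered = True
        self.output = 0.0
        # Chance per step that the circuit 'chooses' to adjust its
        # output; a stand-in for whatever adaptive mechanism we build.
        self.adapt_probability = adapt_probability

    def step(self, required_period, t):
        """Advance one time step; cut power if the output was not
        adjusted on schedule (every required_period steps)."""
        if not self.powered:
            return
        adjusted = random.random() < self.adapt_probability
        if adjusted:
            self.output = 1.0 - self.output   # toggle the output level
        if t % required_period == 0 and not adjusted:
            self.powered = False              # power supply shuts off

def survival_time(required_period, steps=1000):
    """How long the circuit stays powered under a given demanded period."""
    circuit = PoweredCircuit()
    for t in range(1, steps + 1):
        circuit.step(required_period, t)
        if not circuit.powered:
            return t
    return steps

# Vary the demanded frequency and observe how long the circuit survives:
for period in (2, 5, 10, 50):
    print(period, survival_time(period))

Of course, this toy version cannot distinguish a "desire" to stay powered from a blind reflex; probing that difference is exactly what varying the demanded frequency would have to accomplish in a real experiment.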
From this simple starting point, development toward increasingly complex mental functions could proceed much more quickly than biological evolution.
It would be as if biological mental processes have reached the point where they can begin to bootstrap the machine mental processes.
Awareness is a feed-forward loop between the event and its logging in the brain.
To have human-like awareness, we need a logging function that logs a parallel structure in a pseudo-serial manner. This is not difficult, but by itself it has no ability to imbue the log with meaning. A second feed-forward loop is needed for this.
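To make the pseudo-serial logging idea concrete, here is a minimal Python sketch. The channels, the salience weights, and the whole setup are illustrative assumptions, not a proposed architecture: the first loop serializes concurrently generated events into one ordered log, and the second loop re-reads the log and attaches a placeholder "meaning".

import threading
import time
import queue

events = queue.Queue()  # shared sink for the 'parallel structure'

def sensor(name, period):
    """One parallel channel: emits a few timestamped events."""
    for i in range(3):
        time.sleep(period)
        events.put((time.time(), name, i))

threads = [threading.Thread(target=sensor, args=(n, p))
           for n, p in (("vision", 0.01), ("sound", 0.015))]
for t in threads:
    t.start()
for t in threads:
    t.join()

# First loop: serialize the parallel structure into one ordered log.
log = []
while not events.empty():
    log.append(events.get())
log.sort()  # pseudo-serial record, ordered by timestamp

# Second loop: re-read the log and attach 'meaning'. A fixed salience
# weight per channel is a placeholder for whatever process would
# actually imbue the log with significance.
salience = {"vision": 0.8, "sound": 0.5}
for timestamp, channel, index in log:
    print(timestamp, channel, index, salience[channel])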
Just trying to ‘think out loud’ here, as a neophyte on consciousness (with a little exposure to early Dan Dennett and late Christof Koch, with a little Steven Pinker in between). AI can certainly replicate rheostat-like behavioral feedback loops based on environmental sensors, logic gates, and the wiring of simple responses. So AI can at least mimic, let’s say, the light-seeking behavior of a moth or the hunter-and-prey dance of a white blood cell and a bacillus (see: http://www.youtube.com/watch?v=JnlULOjUhSQ).

But consciousness in humans and (at least let’s say) advanced mammals is some sort of ‘feeling’ about perceiving the environment (physical, virtual, relational…) and a mental? sensual? response to such feeling that mediates certain behavior. Without that mediating ‘dashboard’ sensation, I don’t know that a behavior is conscious. Koch describes correlates of consciousness associated with deliberate actions (complex, differentiated yet integrated bioelectrical patterns of neuronal firing broadly across various sections of the brain, metaphorically evocative of the ringing harmonics of a bell), in contrast to much simpler, local firings associated with ‘zombie’ behavior, where the subject wasn’t aware of taking such actions. (It will be fascinating to see if there are degrees of integration in the neuronal activity of other animals, e.g., when a chimp is optimizing a branch to fish for termites, or when a cat is stalking a bird.)

We don’t yet know how to solve the ‘hard’ problem of consciousness. Theories range from Penrosian quantum-mechanical explanations, to a Kochian sense that consciousness may be a fundamental emergent property of special complex information networks, to Pinker’s suggestion that concepts like consciousness or free will may simply not be ‘computable’ by the human brain, which evolved to solve certain survival problems unrelated to philosophizing about such concepts or mentally manipulating 11-dimensional objects.

I’m partial to an evolutionary context for consciousness: a sophisticated tool that emerged as a powerful (but resource-hogging) way to address certain high-priority matters. Degrees of native consciousness can be ‘like’ degrees of consciousness under degrees of sedation: some behavior is performed unthinkingly (‘unconsciously’) but in response to information about the environment collected by ‘robotic’ feedback subroutines, while other behavior is performed in the context of a vivid, ‘sharper than life’ and unnaturally memorable sensation, for example under pharmacologically enhanced neuroreception. Biochemically, did the bacillus chased by the white blood cell (above) feel something like ‘fear’? Does a lobster thrown into the boiling pot feel pain when it shrieks? Do the paramecia suddenly ‘realize’ they are being digested and ‘panic’ in http://www.youtube.com/watch?v=pvOz4V699gk?

Just personally speculating here: while consciousness could conceivably be substrate-neutral, it seems to be an extended phenotype that runs along an organic gamut, from bacillus and paramecium to jellyfish to tuna to crocodile to dog and cat to parrot to primate. So while AI could better and better mimic and simulate conscious-like behavior, the actual content and experience of consciousness may be tied to (in Pinker’s words) the four instinctual F’s: fight, flee, feed, or sexually reproduce. If a machine is not truly motivated by something like that, can it truly be conscious, as opposed to merely behaving as if conscious?
But this still leaves the question of whether such thoughts and feelings aren’t ultimately computable. Surely there will be better and better simulations of such behaviors; what about better and better simulations of such motivations? Is there a point at which the mimicry or simulation of behaviors, via mimicry or simulation of the motivations, becomes essentially equivalent to the real thing? I have to think on that.
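As an aside, the rheostat-like sensor-to-response loop mentioned above really is trivial to write down. Here is a minimal Python sketch of a light-seeking agent in a toy one-dimensional world; every name and constant is a made-up illustration rather than any established model.

# A light source sits at x = 10 in a 1-D world; the agent compares
# two 'sensor' readings and steps toward the brighter side.

def light_intensity(x, source_x=10.0):
    """Toy sensor model: light falls off with distance from the source."""
    return 1.0 / (1.0 + abs(source_x - x))

def step(x, step_size=0.5, delta=0.1):
    """Compare 'left' and 'right' readings, move toward the light."""
    left = light_intensity(x - delta)
    right = light_intensity(x + delta)
    if right - left > 1e-12:        # tolerance guards against float noise
        return x + step_size
    if left - right > 1e-12:
        return x - step_size
    return x                        # equal readings: stay put

x = 0.0
for _ in range(50):
    x = step(x)
print(f"final position: {x:.2f}")   # ends up at the source, x = 10.00

This is the bang-bang version of the loop; a more literally rheostat-like variant would scale the step in proportion to the intensity difference. Either way, nothing in it feels anything, which is exactly the gap being pointed at.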
Going back to the original question, I am not at all sure that psychology or philosophy necessarily has to play a critical role in the creation of improved AI, and I say this as a psychological scientist with keen interests in the philosophy of mind, so I am not downplaying these fields. It seems entirely imaginable that the next advance in AI might occur in an atheoretical manner, with the problem simply yielding to the convergence of technology, demand, and probably luck.
Also, psychology will only matter if the aim of AI is to REPLICATE human thought. It is entirely possible that future AI will be "intelligent" in ways very different from the way humans are intelligent, just as it will differ from current AI. After all, why frame this problem in terms of human "intelligence" when there are other models of information processing out there?
Adrianne:"why frame this problem in terms of human "intelligence" when there are other models of information processing out there."
We are most familiar with human consciousness, and tend to think, rightly or wrongly, that it is the gold standard against which all others must be measured. Besides, it was only Turing's Test that suggested "intelligence" was measurable, and that a machine wouldn't be truly "intelligent" until it could pass for a human.
Now we know that machines can be intelligent in a number of other modes, and they do not need to be human-like to be intelligent.
My personal belief is that a "Tissue Level Psychology" will lead in the direction of building a more human machine, if indeed that is what we want to do.
The exercise is valuable even if what we want is not a "Human" Machine, in that it will have ramifications related to the health and understanding of humans if we get the design right.
I asked the same question (https://www.researchgate.net/post/What_could_be_proof_of_consciousness), and now I think that tests like the Turing Test will not lead to results, because consciousness is a subjective sensation and therefore not measurable. This is no reason to give up. I think we should instead look for indicators in behavior that essentially requires consciousness. This modification of the original question points toward the need for consciousness and the evolutionary purpose of this phenomenon.