Artificial intelligence has come a long way since its inception, but we still do not know whether this science can make machines learn, understand, or think. What is the limit? Can they ever become aware of themselves?
This century still has a long way to go. Machines with wetware components (e.g. biochips) may eventually have a causal substrate that enables them to overcome the strictures of merely syntactic manipulation of symbols.
You are referring to hybrid machines. In the future, when scientists start to experiment with human neural tissue, things could get very dangerous, since a thinking machine without emotions or empathy could engage in rebellious activity. I hope that hybrid devices are never made; humans can be smart, but erratic.
In the 21st century, artificial intelligence technologies and learning machines will continue to develop, and new possibilities for applying them will arise in various sectors of the economy. Autonomous robots equipped with artificial intelligence will be created. However, robots are, and will always be, machines without self-awareness of their own existence. Even if something that could be called artificial consciousness arises, and artificial intelligence can be equipped with that kind of structure, it will never be the same as human consciousness of existence.
Actually, no, I wasn't talking about hybrid human-AI machines. I was thinking about machines with biological or organic components, possibly synthetic, lab-cultured, bioengineered ones, that enable new kinds of computation following some hitherto unexploited biological principle. One criticism of Strong AI has been that it only simulates mentation or consciousness because it is merely syntactic, and computers aren't made out of the right kind of stuff to support consciousness. My point is simply that biological or organic hardware components may provide the right kind of stuff that allows consciousness to emerge, even if it wasn't planned for, or even if how it happens isn't understood. It needn't be a very interesting kind of consciousness; it may just be at the level of a dormouse or a lobster, and not include awareness of ongoing computation. On the other hand, it might also be of a high-level nonhuman alien nature...
If the quantum level is engaged, "predetermination" may not be total, in the sense that activity wouldn't be entirely constrained by programmers' goals. But more generally, a causal mechanism with an intended purpose may have unplanned and unpredictable side effects. So an AI with biocomponents might have unexpected emergent properties.
Yes, that is correct. The difference is that in our case this freedom was granted by God; therefore, granting freedom to machines does not make us gods, but rather fools. If we do not know how to use our own free will, we cannot ensure its correct use in machines.