An AI system needs clear criteria against which its intelligence-development process can be evaluated, something that current self-improvement approaches lack. Because the performance parameters of an AI system differ from features specific to human intelligence, such as consciousness and intuition, it is unlikely that AI would self-improve in a way that degrades human intelligence. How do you view self-improvement in AI within the context of human intelligence?
