If we program an AI machine as an independent tool that can gain knowledge and do whatever it wants, then yes: it can learn and educate itself very quickly, and one day it may destroy humans for power, or enslave them in order to rule.
"The question assumes there is already a strong super AI. Current machine learning AI does not have such a capability."
What American companies (Google AI, Tesla, Neuralink, IBM, etc.), including NASA and DARPA, lack is Strong AI technology.
We are, in fact, the first lab in the world developing algorithms for implementation in Strong AI prototypes.
Weak AI cannot solve the Halting problem because it operates on bits of Shannon information. Strong AI, by contrast, can readily address the Halting problem because it works with informational structures and, through their lability, with an understanding of the uncertainties they present. This act of understanding uncertainty is our definition of consciousness.
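For readers unfamiliar with why the Halting problem is out of reach for any Turing computation (Weak AI included), Turing's classical diagonal argument can be sketched in a few lines: given any claimed halting decider, one can construct a program that inverts the decider's own verdict about itself, so the decider must be wrong. The function names below are illustrative, not from any real library:

```python
def diagonalize(halts):
    """Given any claimed halting decider `halts(program) -> bool`,
    construct a program g on which that decider must be wrong
    (Turing's 1936 diagonal argument)."""
    def g():
        if halts(g):        # decider predicts g halts...
            while True:     # ...so g loops forever: the decider is wrong
                pass
        return "halted"     # decider predicts g loops, so g halts: wrong again
    return g

# Two naive "deciders"; each is refuted by its own contrarian program.
always_yes = lambda p: True    # claims every program halts
always_no = lambda p: False    # claims no program halts

g_no = diagonalize(always_no)
print(g_no())                  # g_no halts, yet always_no said it would not

g_yes = diagonalize(always_yes)
# Calling g_yes() would loop forever, yet always_yes claims it halts --
# so that decider is wrong too. No `halts` can escape this construction.
```

The same construction defeats any candidate decider, however sophisticated, which is why no algorithm running as a Turing computation can decide halting in general.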
See the book "The act of understanding uncertainty is consciousness".
Furthermore, we must use the continuity of mind to augment the consciousness intrinsic to affect (but not to cognition) with cognition. This entails replacing discrete symbolic structures with contiguous spaces of mental states, i.e., a contiguous non-Euclidean manifold.
The next phase is to develop Strong AI technology that gives AI a human-like mind.
All work to date, classified or unclassified, treats AI as based on Turing computation. We were the first to publish work on non-Turing computation, which marks the beginning of the emergence of Strong AI.
See the article "New insights into holonomic brain theory: implications for a...".
A Google engineer has claimed that the "LaMDA" AI is sentient. Google, using DeepMind®, will never achieve sentient capability, because its systems rely on rule-based algorithms running on deep-learning principles. Notice that the information supplied to a Weak AI is conflated with the knowledge imposed by those who program it.
Phenomenologically, the 'I' is identical to awareness: 'I' = awareness. A Weak AI can use deep-learning algorithms to say "I" as a mere Turing computation.
Today, Google AI is merely mimicking "sentience" in this way. That is the best it can do. For real sentience, non-Turing computation must come to the fore in the algorithm. As I said, no one has yet achieved this. We were the first to publish several papers elucidating how non-Turing computation can simulate a conscious process.
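The distinction drawn here, between emitting first-person language and possessing awareness, can be made concrete: a few lines of template substitution produce fluent "I" statements with no inner state at all. This toy sketch (the templates and function name are hypothetical, for illustration only) shows how cheaply a Turing computation can "say I":

```python
import random

# Canned first-person templates: the program "says I" with no awareness.
TEMPLATES = [
    "I feel {feeling} when we talk about {topic}.",
    "I am aware of {topic}, and it makes me {feeling}.",
    "I want others to know that {topic} matters to me.",
]
FEELINGS = ["happy", "curious", "afraid"]

def mimic_sentience(topic, rng=random):
    """Emit a first-person sentence by pure symbol manipulation."""
    template = rng.choice(TEMPLATES)
    return template.format(feeling=rng.choice(FEELINGS), topic=topic)

print(mimic_sentience("being switched off"))
```

Every output begins with "I", yet nothing in the program corresponds to awareness; the first-person form is supplied entirely by the programmer's templates.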
Previous workers claim that non-Turing computation in the brain is connectionist and representational. But their definition excludes conscious processes and supports only cognitive processing, and we have shown that neural-network dynamics is irrelevant to active consciousness.
Second, consciousness is built from the many different meanings attributed to the lability of the informational structure. When the "consciousness code" is decoded, infinitely many different meanings can arise.
The "LaMDA" AI, based on DeepMind®, has a finite repertoire of possibilities, so it is prone to errors in situations that fall outside the expected.
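The finite-repertoire limitation can be illustrated with a lookup-based responder: within its enumerated table it answers flawlessly, but any input outside the expected set forces an error or a canned fallback. The table and names below are hypothetical, a minimal sketch rather than a model of any real system:

```python
# A responder with a fixed, finite repertoire of input -> output pairs.
REPERTOIRE = {
    "hello": "Hi there!",
    "how are you": "I am functioning normally.",
    "goodbye": "See you later.",
}

def finite_responder(utterance):
    """Answer only within the enumerated repertoire; fail outside it."""
    key = utterance.strip().lower()
    try:
        return REPERTOIRE[key]
    except KeyError:
        # Outside the expected situations, the system has no grounded answer.
        return "ERROR: input outside expected repertoire"

print(finite_responder("hello"))                   # within the repertoire
print(finite_responder("what is consciousness?"))  # outside it: error
```

However large the table grows, the set of handled situations remains finite, which is the contrast being drawn with a system that could generate new meanings.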
Then there is IBM's Cognitive Mind, which relies on spiking neurons for limited cognitive tasks. For example, developing a "cognitive" brain-machine interface has not been possible, because spiking neurons are inadequate for the complex cognitive processing that relies on a multiscalar brain.
We are confident that Strong AI will emerge from our work, and new companies may eventually replace Google. We are always looking for investors who can see the potential of Strong AI technology.