AI today is not conscious, and we don't know how to create consciousness in machines. Even if it were to become conscious someday, that wouldn’t mean it would see humans as unnecessary. Whether AI views humans as valuable depends on the goals, values, and ethics we build into it—not some inevitable realization about our imperfections.
AI at present, and for the foreseeable future, runs on classical, binary digital computation. Modern science, likewise at present and for the foreseeable future, does not understand what consciousness is. Some hypotheses suggest it may involve quantum processes, reacting in ways that are not fully predictable and clearly not binary, but that idea remains speculative and contested. If science does not understand consciousness, how can science build it into AI?