Some years back, Artificial Intelligence (AI) was seen as the next revolution, and in many ways it still is. Nevertheless, recent discussions present an aspect of AI that deviates from what humans anticipated, including the possibility of a “takeover”.
According to a post by Professor Philip Goff (2019) on The Conversation, consciousness is unobservable. You cannot perceive someone’s feelings merely by looking at them, and since you cannot look inside their head to verify them either, you are left to make inferences. When it comes to such immeasurable parameters, the familiar tool of correlation analysis leads the conversation.
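To make the idea concrete, here is a minimal Python sketch of what such correlation-based inference looks like in practice. The data and variable names are entirely hypothetical: we never observe the inner state itself, only a proxy (say, self-reported mood) and an outward signal (say, smiles per hour), and we quantify how strongly they move together.

```python
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    # Sample covariance divided by the product of sample standard deviations.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical observations: self-reported mood (1-10) vs. smiles per hour.
reported_mood = [3, 5, 6, 4, 8, 9, 7, 2]
smiles_per_hour = [1, 4, 5, 2, 9, 10, 6, 1]

r = pearson_r(reported_mood, smiles_per_hour)
print(f"Pearson r = {r:.2f}")  # a high r invites, but never proves, the inference
```

The catch, of course, is in the last comment: a strong correlation licenses an inference about the hidden state, but it never lets us observe that state directly.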
In the digital domain, about 90% of these inferences are data driven. But can such data always be right, and can they ever define consciousness (should it arise)? Obviously not. Either way, should we be worried that AI will someday gain consciousness and surprise humanity, as recent debates suggest, especially when we rely almost entirely on data from these “machines” that we only partly understand?