As we use AI in our daily lives more than ever before, it's fair to say that we keep handing a greater share of responsibility to these so-called intelligent algorithms. But are we actually ready to strip responsibilities away from humans and give them to AI? And perhaps more importantly, is AI ready for that? Because who is responsible when AI crashes your self-driving Tesla? Who should take the blame when AI algorithms are inherently biased, as was the case in the Dutch childcare benefits scandal? And what should we do when AI software inflates the cost of Uber fares, which is exactly what happened after the London terrorist attacks in 2017?
AI is just a form of representation: digitization, visualization, and computation across various platforms. What we need, however, is to know and understand AI along with the demands of its various applications.
I have been hearing about AI for 45 years now. The first time I heard of it was before the personal and toy computer era of the early 80s. The "AI computer" back then was a PC board with a Z80 and a couple of 7-segment displays, much "stupider" than the pocket calculators of that age (mid-70s).
Has AI come any closer to intelligence by now?
Just as a crowbar increases human physical strength, can we say that today's AI is no more than a very fancy mental crowbar, a dumb tool that amplifies our mental power?