"1. AI can completely take over for human workers.
While artificial intelligence can take over basic and repetitive processes (e.g., data entry), it doesn’t have the cognitive capabilities at the moment to take over many professional tasks, according to the Spiceworks article, “10 Most Common Myths About AI.” For example, while large language models like ChatGPT can create content, they still need prompts and human editors since the AI can “hallucinate” and fabricate details and sources.
2. AI tools are always accurate and objective.
AI tools – like the humans who created them – have biases. Additionally, generative AI can and will produce plausible-sounding but false answers in response to prompts, according to the Google Cloud blog entry, “The Prompt: Debunking five generative AI misconceptions.”
It’s wise and necessary to fact-check everything ChatGPT and other AI tools tell you. Otherwise, you could end up like the lawyers who had to explain to an unamused judge why they submitted an AI-created court filing that referenced fictional past cases, according to the Associated Press.
3. Only large enterprises can benefit from AI.
Artificial intelligence isn’t just for large corporations. In fact, since one of the top reasons to deploy AI is to save time by automating basic and mundane tasks, it’s particularly useful for smaller companies that are short-staffed and looking to increase productivity while keeping costs down. For more information, check out our previous blog entry, “5 tips for small business leaders interested in generative AI.”
4. AI models trained on larger data sets are better.
The quality and type of data matter more than the quantity, according to Spiceworks. To maximize your AI ROI, you want solutions trained on information that is accurate, properly formatted, and relevant to your business. Human workers must also properly label that data so the AI can learn from it.
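To make "properly labeled" concrete, here is a minimal sketch of what labeled training data looks like, using scikit-learn purely as an illustration; the example texts, labels, and library choice are assumptions, not anything taken from the article.

```python
# A tiny text classifier trained on human-labeled examples (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Every example is paired with a label a person assigned;
# the model can only learn what the labels teach it.
texts = [
    "Invoice #1042 is overdue, please remit payment",
    "Lunch next Tuesday?",
    "Payment received, thank you",
    "Team offsite photos attached",
]
labels = ["billing", "personal", "billing", "personal"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Mislabeled or irrelevant rows would degrade this model; adding more data
# alone would not fix them, which is the quality-over-quantity point above.
print(model.predict(["Reminder: invoice #2001 is due Friday"]))
```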
5. Artificial intelligence can learn on its own.
While solutions with machine learning capabilities can modify their responses based on previous interactions, they still need human data scientists to select and prepare training data and update the software, according to the Gartner article, “6 AI Myths Debunked.”
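As a rough illustration of that distinction, the sketch below (plain Python, with invented names) shows why adapting to previous interactions is not the same as learning on its own: the response changes with accumulated context, but the underlying model only changes when a person retrains and redeploys it.

```python
# Illustrative sketch only: names and structure are invented, not a real product.
class Assistant:
    def __init__(self, model_version):
        self.model_version = model_version  # frozen model; only a human updates this
        self.conversation = []              # "learning from previous interactions"
                                            # is often just accumulated context

    def reply(self, message):
        self.conversation.append(message)
        # The answer adapts to the conversation so far, but the underlying model
        # stays the same until a data scientist retrains and ships a new version.
        return f"[model {self.model_version}] considered {len(self.conversation)} messages"

    def update_model(self, new_version):
        # This is the human's job: select and prepare data, retrain, validate, deploy.
        self.model_version = new_version

bot = Assistant(model_version="2024-01")
print(bot.reply("Hello"))
print(bot.reply("Remember what I said earlier?"))  # adapts, but nothing was retrained
```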
A bank employee once tried to sell me a special fund: "This one is managed by an AI." Of course I declined; I want humans to bear the responsibility. People use the word AI the way they use 'vegan', 'organic', or 'environmentally friendly': as a marketing slogan.
I think one of the worst misconceptions about AI is that people treat it as a single discipline. One often reads headlines like "Using AI to provide the world with...", "Using AI, MIT researchers identify...", "Predicting ... with data and AI", or "Argonne researchers use AI to optimize...", as if the word AI alone carried any meaning. Did the researchers ask a chatbot? Did they use some kind of time-series prediction? Was a neural net part of the work, or genetic algorithms?
In the case of the fund I declined, my guess is that some kind of rule-based system was behind the "AI-managed" label.
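For illustration, a rule-based system in this sense can be nothing more than a few hand-written if/then rules; the thresholds and field names below are invented for the example, not anything a bank is known to run.

```python
# Invented example of a rule-based "AI-managed" strategy: a few fixed if/then rules.
def rebalance(portfolio, market):
    if market["volatility"] > 0.30:          # arbitrary threshold
        portfolio["bonds"] += 0.10
        portfolio["stocks"] -= 0.10
    elif market["trend"] == "up":
        portfolio["stocks"] += 0.05
        portfolio["bonds"] -= 0.05
    return portfolio

print(rebalance({"stocks": 0.60, "bonds": 0.40},
                {"volatility": 0.40, "trend": "up"}))
```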