Hm... Possible. Several people have come away with the impression that they had exchanges with AI agents on the Internet. The supposition is that an artificial agent can mislead, and can look convincingly human-like, when trained on huge amounts of data.
What do I think... Here is an old joke about AI:
"Flight AD316, you are crossing the trajectories of all landing flights, what is happening there! Captain? Do you copy?"
The cabin: "Hic! The captain is very drunk, Sir, he is not in the cabin, Sir."
"What! Pilot, what is happening there?"
The cabin: "The crew, hic, had a party, Sir. They are all, hic, so to say, asleep on the floor, Sir."
"Flight AD316, who is speaking from the cabin?!?"
The cabin: "The autopilot, Sir, ready to follow your instructions, hic!"
Problems can arise…will arise without appropriate control.
One relevant piece of work is the SOPHIE system, designed and implemented by Donald Michie and Claude Sammut and installed at the Sydney Powerhouse Museum about 15 years ago. It established rapport with a human user, heuristically(?) switching from "goal mode", where the program provided information about the exhibits, to "chat mode", where the program's responses prompted users to speak about themselves.
Am I to understand that the "explainers" in the Qingdao project are "just" people? (NB: I'm hoping to see for myself in the spring, when I plan to be in Qingdao.)
I'm confident other relevant systems have been set up around the world!