Can AI-based humanoid robots experience the same system errors as other AI systems (hallucinations, language bias, algorithmic errors, and misuse)?
Yes — but in a very different sense from how humans hallucinate.
For a humanoid robot powered by AI, a “hallucination” wouldn’t be a sensory illusion caused by brain chemistry, but rather an error in perception, reasoning, or output caused by:
Faulty sensor data: If a robot's camera misinterprets shadows as obstacles, or its microphone picks up background noise and mistakes it for a command, it is essentially "perceiving" something that isn't there. This is the AI equivalent of a perceptual hallucination.
Faulty internal model or reasoning: AI models (such as large language models) can generate outputs that sound plausible but are factually wrong; this is what "hallucination" usually refers to in AI terms. In a humanoid robot, it could mean planning actions on false assumptions (e.g., believing there is a door where there is actually a wall).
Data processing conflicts: If vision, lidar, and tactile sensors disagree because of calibration errors, the robot might "believe" contradictory things about its environment, similar to a cognitive hallucination (a minimal consistency check of this kind is sketched after this list).
Adversarial attacks: If someone feeds the AI deliberately manipulated images or audio, it can "see" or "hear" things that do not exist, a bit like being tricked into hallucinating.
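As a rough illustration of the sensor-data and sensor-conflict cases above, the following Python sketch cross-checks distance estimates from a camera-based detector and a lidar before the robot acts. The names, thresholds, and data structures are assumptions made for illustration, not part of any real robot stack; the point is only that contradictory "beliefs" can be detected and answered with a safe stop rather than a guess.

```python
# Sketch: cross-check two independent sensor estimates before acting.
# All names and thresholds are illustrative assumptions, not a real robot API.

from dataclasses import dataclass

@dataclass
class Estimate:
    distance_m: float   # estimated distance to the nearest obstacle ahead
    confidence: float   # detector confidence in [0, 1]

TOLERANCE_M = 0.5       # max allowed disagreement between modalities (assumed)
MIN_CONFIDENCE = 0.6    # ignore low-confidence detections (assumed)

def fused_obstacle_distance(camera: Estimate, lidar: Estimate) -> float | None:
    """Return a trusted obstacle distance, or None if the sensors contradict."""
    # A shadow misread by the camera typically shows up as a vision-only,
    # low-confidence detection that the lidar does not corroborate.
    if camera.confidence < MIN_CONFIDENCE:
        return lidar.distance_m

    if abs(camera.distance_m - lidar.distance_m) > TOLERANCE_M:
        # Contradictory beliefs about the environment: trust neither.
        return None

    # Agreeing estimates: take the more conservative (closer) one.
    return min(camera.distance_m, lidar.distance_m)

def plan_forward_motion(camera: Estimate, lidar: Estimate) -> str:
    distance = fused_obstacle_distance(camera, lidar)
    if distance is None:
        return "halt_and_recalibrate"   # disagreement -> stop instead of guessing
    return "proceed" if distance > 1.0 else "stop"

# Example: the camera "sees" a shadow as a close obstacle, the lidar disagrees.
print(plan_forward_motion(Estimate(0.4, 0.9), Estimate(3.2, 0.95)))  # halt_and_recalibrate
```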
Key difference from humans: Robots don’t have subjective consciousness, so they don’t feel these hallucinations — they’re just faulty interpretations or generated outputs. But from the outside, their actions could appear as if they’re “seeing” or “thinking” unreal things.
In short, yes: while these hallucinations differ from human sensory illusions, they still pose real risks in humanoid robots, especially those powered by large language models. Humanoids are still in their infancy, and in workplace settings even occasional errors in perception or reasoning can undermine safety and trust. That is why research now focuses on solutions such as grounding AI outputs in real-time sensor data, cross-agent verification (e.g., CogniVera), and constraining robots to well-defined, validated tasks, steps aimed at making them reliable and trustworthy partners alongside humans (Ji et al., 2023; Kwon et al., 2024; Huang et al., 2024).
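To make two of those mitigation ideas concrete, here is a minimal Python sketch, under assumed names and data structures (it does not reproduce CogniVera or any of the cited systems): the robot accepts an LLM-proposed action only if it belongs to a whitelist of validated tasks and can be grounded in the objects its sensors currently observe.

```python
# Sketch: gate LLM-proposed actions behind task validation and sensor grounding.
# Task names, action format, and the world-model structure are assumptions only.

VALIDATED_TASKS = {"fetch_item", "open_door", "hand_over"}   # well-defined, tested tasks

def is_grounded(action: dict, world_model: dict) -> bool:
    """Check that every object the action refers to is actually observed right now."""
    return all(obj in world_model["visible_objects"] for obj in action.get("objects", []))

def execute_if_safe(llm_action: dict, world_model: dict) -> str:
    # 1. Constrain to validated tasks: reject anything outside the tested repertoire.
    if llm_action["task"] not in VALIDATED_TASKS:
        return f"rejected: '{llm_action['task']}' is not a validated task"

    # 2. Ground in real-time sensor data: reject actions that reference objects
    #    the sensors cannot currently confirm (e.g., a door the model only imagined).
    if not is_grounded(llm_action, world_model):
        return "rejected: action refers to objects not present in the sensed scene"

    return f"executing {llm_action['task']}"

# The language model proposes opening a door that the sensors do not see.
proposed = {"task": "open_door", "objects": ["door_east"]}
sensed = {"visible_objects": {"table", "chair", "wall_panel"}}
print(execute_if_safe(proposed, sensed))  # rejected: action refers to objects ...
```

The design choice in both checks is the same: the language model's output is treated as a proposal to be verified against the physical evidence, never as a fact about the world.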