I am curious about the differences between high-level empathy in AI such as ChatGPT -- its ability to respond to human cues for empathy -- versus the compromised, sometimes spotty, less-than-optimal empathic practices of human beings who may not be at their highest levels of compassion or empathy.

It is certain that when humans are at high levels of authentic empathy and real compassion -- such as when a parent or partner gives their full loving attention to the other -- they are superior to AI.

But what happens when busy, overworked therapists, researchers, and teachers are in overwhelm mode and must continue to use empathic signals? They may be finishing up deadlines, or in compassion-fatigue mode with multiple students, clients, patients, and participants. They may simply be performing empathic cues and signals without being able to really feel what they show.

My question is: In these cases of flawed or compromised human empathy versus full-on AI empathy, which style or system of empathic cues actually works better, which is superior, and which is harder to differentiate from the real thing?
