Following a short discussion in another forum, in which reference was made to AI having a Theory of Mind (ToM), I am interested in other opinions from RG members who have an active interest in an Ethics of AI.
In a sense, one of the critical emergent capabilities (in my thinking: limitations) is the idea that AI may possess a ToM. This invites the argument that any AI with a ToM exhibits a form of imagination, since it can attribute mental states to human subjects. However, the very act of such attribution is, to my mind, problematic to say the least.
I have written, elsewhere, that “…through subjectification, [a] ToM assumes a rationality of action that may be irrationally violated by an ‘Other’. We are shaken when an absurd action is taken by an ‘Other’ that appears to us as irrational, or wrong, or immoral, or illegal; we question the mental states we have ascribed to that ‘Other’, and whether they are an ‘other’ at all. Thus …the axiomatic variability of an individual’s mental state ensures there can be no level of universal access to reality…” Allowing AI to (imaginatively) attribute mental states as part of its outputs raises ethical concerns we perhaps have not yet begun to grasp. Is the real problem here that, perhaps ironically, generative AI is more human-like in its processing (flaws and all) than we might have anticipated?
Interested in other perspectives...