While having a concept of Self, as opposed to others or to the environment, seems useful for focusing an organism's functions on survival and on spreading its DNA, is there any evidence that consciousness itself confers an evolutionary advantage?

To elaborate further: here I'm talking about consciousness as the first person experience. And by "first person experience" I do not mean "experience OF the first person": rather, I'm specifically addressing the "experience IN the first person MODALITY" (as a corollary to this question, I'm proposing that the word "consciousness" refers to too many concepts). In this view, I consider self-consciousness to be "experience OF the first person IN the first person modality".

Even if we embrace the assumption that consciousness is always consciousness OF something, we still lack an explanation of the nature and purpose ("what it is / what it is for", rather than "how it works") of first person experience, and hence of why evolution would have favored it.

In many other Q/As about self and consciousness, people discuss constructs that could function even without consciousness. Two examples:

-self: a neural network comprising semantic concepts about the world could very well include a concept of self as non-other or non-environment, or even a concept of self as an independent organism with such and such features; why do we need consciousness to conceptualize it? Would a machine decoding all the concepts connected to the node of (or the distributed knowledge about) self be considered conscious? We do not have to attribute consciousness to the machine in order to explain the machine processing its concept of self.

-thinking: processing is certainly different from consciously elaborating something, as all the studies on automatic and subconscious processing show. On the other hand, this point touches on the free will problem: when we consciously elaborate something, does that mean we are voluntarily doing so? Or are we just experiencing a first person "show" of something that has already happened subconsciously (as Libet's studies suggest)? Without touching upon the ad infinitum regression problems, this raises the question of whether consciousness is useful without free will: if conscious experience is just a screen on which things are projected, no free will is needed, and then what is the whole point of consciousness? Do we therefore also need to accept free will in order to accept consciousness? If we are working with the least number of assumptions, it seems unlikely that we can accept consciousness.

It seems to me that the general attitude of cognitive theories within a biological information processing/computational theory of mind framework is to try to explain everything without putting consciousness into the equation. And indeed, no one actually seems to put consciousness into the equation when explaining cognition or behaviour (at least in modern times).

All in all, the above reasoning suggests that consciousness is not needed and has no evolutionary advantage over automatic, non-conscious entities; or else that we must make more and more assumptions (such as accepting free will) to make sense of consciousness.

I think that asking why we have consciousness could lead us to understand it better.
