In my opinion, the boundary between "mere symbol shuffling" and "actually understanding" is not as crisp as some people (e.g. Searle) seem to think. It is a matter of degree; it is a spectrum. Take a computer program that is proving a theorem: I would argue that the program as a whole does, to some extent, "understand" what it is doing; it is reasoning at a semantic level (to some extent). But of course this program is executed on hardware that is only capable of mere symbol shuffling. The same is, in my opinion, probably true for humans (e.g. human mathematicians): at the lowest level, a human brain is probably only doing something like "mere symbol shuffling" (the atoms and molecules in your brain do nothing more than obey the laws of physics), but still, as a whole, actual intelligence and understanding emerge from these low-level processes.
Let me ask about the large number of people who "use" math without any real idea of what "integration" means, for example. They may calculate an integral very fast without any meaning in mind. Are they, in a way, aware only like the "symbol manager" in the room?
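To make the question concrete, here is a minimal sketch (my own toy example, with invented function names) of how an integral can be computed by pure rule application, with no meaning attached to it anywhere:

```python
# Integration as "mere symbol shuffling": the power rule applied term by term,
# with no notion of area or accumulation behind it, just rewriting one pair of
# numbers into another pair.
from fractions import Fraction

def integrate_term(coeff, power):
    """Integrate coeff * x**power by mechanically rewriting the symbols:
    new coefficient = coeff / (power + 1), new power = power + 1."""
    new_power = power + 1
    return Fraction(coeff, new_power), new_power

def integrate_polynomial(terms):
    """Apply the rewrite rule to each (coeff, power) pair; the 'symbol manager'
    never needs to know what an integral means, only which rule to fire."""
    return [integrate_term(c, p) for c, p in terms]

# integral of 3x^2 + 4x  ->  x^3 + 2x^2 (the constant of integration is ignored)
print(integrate_polynomial([(3, 2), (4, 1)]))  # [(Fraction(1, 1), 3), (Fraction(2, 1), 2)]
```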
Well, your example of integration is interesting, because it reminded me of another important study in cognitive science about learning. Larkin, McDermott, Simon & Simon (1980) studied how novices and experts approach problems in physics.
According to these authors, novices prefer backward chaining. They start with a formula providing the solution term as its output variable. They then recursively take its input variables, associate other formulas with them, and so on, until they have retraced all the processing from the input data to the requested output.
Conversely, experts prefer forward chaining. They start from the data and create a flow, more or less intuitively, that moves towards the solution.
We may see the first as primitive processing. Using basic mathematical symbols and operators, we just apply the formulas without really understanding what we are doing in the problem domain. In the second case, we are aware of our domain. Without making all the processing explicit, our intuition gives us clues about which direction to take in order to reach the solution.
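To make the contrast concrete, here is a minimal sketch (my own illustration, not taken from Larkin et al.; the formula set and variable names are my assumptions) of the two strategies on a toy kinematics problem:

```python
# Toy problem: given mass m and velocity v, find kinetic energy E via momentum p.
# Formulas are (output, inputs, function) triples.
FORMULAS = [
    ("p", ("m", "v"), lambda m, v: m * v),        # momentum        p = m * v
    ("E", ("p", "v"), lambda p, v: 0.5 * p * v),  # kinetic energy  E = p * v / 2
]

def backward_chain(goal, known):
    """Novice-style: start from the formula that yields the goal and recursively
    resolve each of its input variables until only the given data remain."""
    if goal in known:
        return known[goal]
    for output, inputs, fn in FORMULAS:
        if output == goal:
            args = [backward_chain(var, known) for var in inputs]
            return fn(*args)
    raise ValueError(f"no way to derive {goal}")

def forward_chain(goal, known):
    """Expert-style: start from the data and keep deriving new quantities with any
    formula whose inputs are already known, until the goal variable appears."""
    known = dict(known)
    while goal not in known:
        progress = False
        for output, inputs, fn in FORMULAS:
            if output not in known and all(var in known for var in inputs):
                known[output] = fn(*(known[var] for var in inputs))
                progress = True
        if not progress:
            raise ValueError(f"stuck: cannot derive {goal}")
    return known[goal]

data = {"m": 2.0, "v": 3.0}
print(backward_chain("E", data))  # 9.0
print(forward_chain("E", data))   # 9.0
```

The backward chainer starts from the formula for the requested output and works back to the data; the forward chainer fires whichever formula already has all its inputs available until the goal shows up.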
This confirms my experience: I tend to feel aware of a matter only when I am finally able to visualize it simply, without using an explicit and detailed symbolic notation.
"I tend to feel to be aware of a matter only when I am finally able to visualize it simply, without using in any situation an explicit and detailed symbolic notation." (!)
Sometimes I look at this kind of question as a sum of pieces of imagery devoted to continuously representing the "show of signification/simplification" that we call "consciousness".
Once you have simplified something, a set of internal rules is ready for you: maybe these rules, all together, are constitutive of awareness? (See von Helmholtz, for example.)
This is also possible in hallucinatory experiences: in my ward there is now an awake and partially aware patient with a large cerebellar hemorrhage complicated by hydrocephalus. In a first stage of recovery he perceived a lot of fish (sharks) all around him and was very scared. In a second stage the patient recognized the delusion as not real, thanks to a "little realistic man in his mind" (a very paradoxical and intriguing way to construct a simplification). Now he is truly aware of the delusions: "the sharks are a product of my own brain".
On the other hand, the patient admits that the situation is "under control" because he is far from the sharks now. I suppose that the simplification is functioning now because he is a neurological patient (the hardware is out of order): it would not be the same if the software (the sum of simplifying rules) were malfunctioning.
I fear I have not fully understood your message. However, the use of the word simplification in both my contribution and yours is intriguing. Simplification is the main objective of a reduction/divide et impera methodology, but at the same time it results from choosing the right layer of abstraction when interpreting reality, for example by connecting more abstract concepts to sensations and emotions (I wonder if this plays a role in dreaming). While the first is more rigorous but slower, the effectiveness of the second comes from experience.
In the case of this patient, it seems that visualization pictures his situation of danger, but reduction lets him understand that it is not real, and, consequently, the visualization changes.
The Chinese Room has nothing to do with consciousness. It is only a symbol-manipulating automaton without any subjective impression. There is no association, no anticipation, no emotion, no memory, nothing but a stupid algorithm.
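To be blunt about how little is going on, here is a minimal caricature (my own sketch, not Searle's original formulation; the two-entry rule book is invented) of the room as a pure lookup automaton:

```python
# The room as a stateless lookup table: symbols in, symbols out.
# No association, no anticipation, no emotion, no memory anywhere in the loop.
RULE_BOOK = {
    "你好吗": "我很好",            # "how are you?" -> "I am fine"
    "你叫什么名字": "我没有名字",    # "what is your name?" -> "I have no name"
}

def chinese_room(symbols: str) -> str:
    """Match the incoming string against the rule book and return the prescribed
    reply; if no rule applies, hand back a fixed filler string."""
    return RULE_BOOK.get(symbols, "对不起")  # "sorry"

print(chinese_room("你好吗"))  # 我很好
```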