One of the foremost contributions of Turing to computer science is the concept of a "Universal Machine" which can emulate other machines of arbitrary complexity, performance considerations notwithstanding. Indeed, much of the success of modern computers can be attributed to this trait, which allows us to build applications (abstract "machines") of ever-increasing complexity on top of relatively simple hardware architectures.
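To make the "emulation as data" idea concrete, here is a minimal sketch in Python (purely illustrative, not taken from any particular source): a fixed interpreter that runs whatever Turing machine you hand it as a transition table. The unary-incrementer machine is just a made-up example.

```python
# A minimal sketch of the "universal machine" idea: one fixed interpreter
# that runs any Turing machine described to it as data.

def run_turing_machine(rules, tape, state="start", head=0, blank="_", max_steps=1000):
    """Simulate a Turing machine given as a transition table.

    rules maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left) or +1 (right). Halts when no rule applies.
    """
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in rules:  # no applicable rule: halt
            break
        state, tape[head], move = rules[(state, symbol)]
        head += move
    return "".join(tape[i] for i in sorted(tape))

# Example "guest" machine: append one '1' to a block of 1s (unary increment).
increment = {
    ("start", "1"): ("start", "1", +1),   # scan right over the existing 1s
    ("start", "_"): ("done",  "1", +1),   # write a 1 at the first blank
}
print(run_turing_machine(increment, "111"))  # -> "1111"
```

The point is that the interpreter itself never changes; all the variety lives in the description it is fed.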
Thinking about this one day, I was struck by a curious similarity between the concept of a Universal Machine and our own cognitive abilities:
* A Universal Machine can simulate other machines of arbitrary complexity without needing any significant change to its own structure;
* Human brains can learn behaviors of seemingly unending variety, yet the rate of morphological change over a lifetime doesn't seem to account for this ability – and in fact, much of the change we do go through (mostly during childhood) seems more linked to developing the *ability to learn* than to learning any specific behavior.
If our brains are assimilating new behaviors all the time, yet they're not physically changing to accommodate these novel abilities, could it be that they do this in a way similar to how a Universal Machine works – that is, by simulating other constructs not unlike themselves? Is the brain a Universal Neural Network, which learns new behaviors by simulating networks tailored to its requirements?
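To show the flavour of what I mean, here is a toy sketch – again illustrative only; the shapes, names and the tanh nonlinearity are my own assumptions, not a claim about real neurons – of a single fixed evaluator whose own structure never changes, yet which behaves like whichever small feed-forward network is described to it as input data.

```python
# Toy sketch: one unchanging piece of "mechanics" that simulates any small
# feed-forward network given purely as data (a list of (W, b) layer pairs).
import numpy as np

def universal_net(description, x):
    """Evaluate a feed-forward net whose weights arrive as input data."""
    h = x
    for W, b in description:
        h = np.tanh(W @ h + b)   # fixed mechanics; the "machine" is the input
    return h

rng = np.random.default_rng(0)
# Two different "imaginary" networks, both run by the same unchanging code.
net_a = [(rng.standard_normal((4, 3)), rng.standard_normal(4)),
         (rng.standard_normal((2, 4)), rng.standard_normal(2))]
net_b = [(rng.standard_normal((2, 3)), rng.standard_normal(2))]

x = np.array([0.5, -1.0, 2.0])
print(universal_net(net_a, x))
print(universal_net(net_b, x))
```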
Making a "real" device able to simulate any number of "imaginary" variations of its basic design looks, at first sight, like a simple and elegant way to add extensibility to a system that cannot undergo much morphological change. And it seems a natural step, evolution-wise, for biological neural networks to duplicate their own mechanics on a higher level as they grow in sophistication. So surely someone must have thought of this before? Yet I couldn't find any reference to this idea.
What do you make of this idea? Are there clear arguments against it – things in the "any fool knows" category – either from neuroscience or artificial neural network theory, that would outright disprove the conjecture and/or otherwise make it seem a waste of time? Did anybody think of this before?