Thank you for linking to your interesting review. I have proposed the synaptic matrix as a minimal neuronal mechanism for long-term memory of visual images and neuronal tokens of these images. You might be interested in looking at Ch. 3, in *The Cognitive Brain*, "Learning, Imagery, Tokens, and Types: The Synaptic Matrix", here:
Thanks for your comment. I am familiar with Rolls's work, and his Fig. 3 is similar to my synaptic matrix (1975, 1977, 1991). What isn't clear in Fig. 3 is how normalization of synaptic transfer weights is achieved, and how competing parallel outputs are appropriately inhibited and reset. Another paper on learning and long-term memory that might interest you is "Sparse Coding of Faces in a Neuronal Model", here:
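To make the two open questions concrete, here is a minimal illustrative sketch (my own toy illustration, not Trehub's or Rolls's actual model): one simple way a synaptic matrix could normalize its transfer weights, and one way competing parallel outputs could be inhibited and reset via winner-take-all competition.

```python
import numpy as np

def normalize_weights(W):
    """Divisively normalize each output cell's afferent weights so that
    every row sums to 1 -- one simple normalization scheme."""
    return W / W.sum(axis=1, keepdims=True)

def winner_take_all(activations):
    """Mutual inhibition taken to its limit: only the most strongly
    activated output line survives; all competitors are reset to zero."""
    out = np.zeros_like(activations)
    out[np.argmax(activations)] = activations.max()
    return out

rng = np.random.default_rng(0)
W = normalize_weights(rng.random((3, 5)))  # 3 stored patterns over 5 input lines
x = np.array([1.0, 0.0, 1.0, 0.0, 1.0])   # a test input pattern
y = winner_take_all(W @ x)                 # only one output survives competition
print(y)
```

The point of the sketch is only that *some* explicit normalization and reset rule is needed; which rule a biologically plausible circuit implements is exactly the question left open by Fig. 3.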
http://people.umass.edu/trehub/sparscodtre.pdf
With regard to my question about explaining the SMTT hallucination, I haven't yet seen an explanation other than the retinoid model. If you come across an alternative explanation, I would greatly appreciate it if you would reference it here.
Thanks again for your interesting links. You are correct to point to the difference between image processing and semantic processing in the cognitive brain. fMRI analysis by Gallant and colleagues gives us a picture of the cortical "packaging" of semantic activity. My theoretical model of the neuronal mechanisms for visual-semantic processing (e.g., learning and memory evoked by natural movies) explains how the necessary neuronal mechanisms do the job. This kind of cognition depends on widely distributed axonal connections between the conscious retinoid system, pre-conscious synaptic matrices in the visual and auditory modalities, and the semantic networks. For a general idea of how this works, see *The Cognitive Brain*, "Building a Semantic Network", in particular Fig. 6.5, here:
You might also want to take a look at "Self-Directed Learning in a Complex Environment" where the neuronal model attaches a proper word to a newly learned object that it parses out of the novel environmental scene:
Yes, I think the correlations between neuron spiking, local field potentials, and BOLD measurements justify this kind of research. But we want to know what kind of brain *mechanisms* do the cognitive work that Gallant's kind of research demonstrates. This is what the neuronal mechanisms and systems modeled in *The Cognitive Brain* aim to explain. Are there alternative theoretical models that have been tested and have demonstrated similar or better competence?
You asked about a speech-recognition algorithm based on the semantic network model in Ch. 6 of *The Cognitive Brain*. I haven't attempted to work one out, and I'm not aware of anyone else who has. But it certainly seems feasible, and I wouldn't be surprised if others have developed such algorithms.
In reading the papers of Timothy Hubbard, I've recently become aware of the illusory-line-motion (ILM) phenomenon. In this paradigm, a cue appears and is followed by a stationary line that is presented in its entirety. However, one's conscious experience is of a line *expanding* from the end nearest the cue to the far end. It seems to me that this illusory experience can be explained by the same retinoid mechanisms that explain the seeing-more-than-is-there (SMTT) hallucination. In the retinoid model, the heuristic self-locus (selective attention) targets the retinoid representation of the edge of the line nearest the cue and then sweeps across the image, terminating at the far edge of the line. In this process it adds successive increments of excitation to the autaptic neurons that represent the line in retinoid space, brightening them from the near edge to the far edge. At the same time, the shift-control cells that drive the motion of the heuristic self-locus move the retinoid image of the perceived line away from the cue. Incidentally, the same kind of retinoid mechanisms can explain the phi phenomenon and motion after-effects.
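The sweep described above can be sketched as a toy simulation (my own illustration of the idea, not a quotation of the retinoid model): a "self-locus" visits successive positions along a row of autaptic cells, adding an increment of excitation at each step, so cells nearer the cue are brightened earlier than cells farther away.

```python
LINE_LENGTH = 10   # autaptic cells representing the stationary line
INCREMENT = 1.0    # excitation added at each position the self-locus visits

def sweep(line_length=LINE_LENGTH, increment=INCREMENT):
    """Return a list of activation snapshots, one per time step, as the
    self-locus sweeps from the near edge (index 0) to the far edge."""
    activation = [0.0] * line_length
    snapshots = []
    for position in range(line_length):
        activation[position] += increment  # brighten the cell just reached
        snapshots.append(list(activation))
    return snapshots

frames = sweep()
print(frames[0])   # only the cell nearest the cue is active
print(frames[-1])  # the whole line is active
```

Early frames show excitation only near the cue; the final frame shows the whole line active, which is the sense in which a physically stationary stimulus could be experienced as expanding away from the cue.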