You can find such systems in Word(TM) or SwiftKey(TM).
I think there is no cognitive model behind them, only statistics. A model that works well computes the distance (Levenshtein distance) to similar words and the statistical frequency of the next two or three words.
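A minimal sketch of that statistical approach, assuming a tiny hand-made word/frequency list and ignoring the next-word context for brevity:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Hypothetical unigram frequencies; a real system would estimate these
# from a large corpus and would also score the following words.
FREQ = {"their": 900, "there": 850, "these": 500, "three": 400}

def suggest(typed: str, max_dist: int = 2):
    """Rank dictionary words by edit distance, breaking ties by frequency."""
    candidates = [(levenshtein(typed, w), -FREQ[w], w) for w in FREQ]
    return [w for d, _, w in sorted(candidates) if d <= max_dist]

print(suggest("thier"))  # ['their', 'there', 'three']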
A cognitive model would be necessary if you wanted to exploit the semantic context. That problem is still open ...
You asked about cognitive models of typing errors. Such models have been developed in the ACT-R architecture and are available on the associated webpage.
Here is a paper that identifies/models half a dozen different sources of errors. It is not just a matter of words that aren't in the dictionary, but common confusions (typographic, orthographic, semantic or phonetic):
David M. W. Powers (1997). "Learning and Application of Differential Grammars." Proceedings of the Meeting of the ACL Special Interest Group in Natural Language Learning, Madrid.
This paper explicitly excludes style errors, but includes typos, phonos, grammos, frequens, foreignish, and idiosyncratic errors.
Correcting typos specifically takes into account nearness on the QWERTY keyboard, phonos relate to phonetic similarity, and grammos relate to errors that change grammatical role or part of speech (with or without semantic impact).
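One common way to model that keyboard nearness is a weighted edit distance in which substituting an adjacent key is cheaper than substituting a distant one. A sketch under that assumption (the partial adjacency table and the 0.5 cost are illustrative, not taken from the paper):

```python
# Partial QWERTY adjacency table (illustrative; a full table covers all keys).
ADJACENT = {
    "a": set("qwsz"), "s": set("awedxz"), "d": set("serfcx"),
    "e": set("wsdr"),  "r": set("edft"),  "t": set("rfgy"),
    "i": set("ujko"),  "o": set("iklp"),
}

def sub_cost(a: str, b: str) -> float:
    """Substitution is cheap for neighbouring keys, expensive otherwise."""
    if a == b:
        return 0.0
    if b in ADJACENT.get(a, set()):
        return 0.5   # fat-finger slip onto a neighbouring key
    return 1.0       # any other substitution

def weighted_distance(a: str, b: str) -> float:
    """Levenshtein distance with keyboard-aware substitution costs."""
    prev = [float(j) for j in range(len(b) + 1)]
    for i, ca in enumerate(a, 1):
        cur = [float(i)]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1.0,                 # deletion
                           cur[j - 1] + 1.0,              # insertion
                           prev[j - 1] + sub_cost(ca, cb)))
        prev = cur
    return prev[-1]

# 'test' mistyped as 'tesr' (r neighbours t) scores better than 'tesp'.
print(weighted_distance("tesr", "test"))  # 0.5
print(weighted_distance("tesp", "test"))  # 1.0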
An example of frequens ("let your fingers do the walking") is frequent words substituting for similar but less frequent words (not just homophones, but pairs like 'are/our').
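Since frequens are real words, a dictionary lookup never catches them; one standard trick (a confusion set scored against context frequencies, not necessarily Powers's own method) looks like this toy sketch with made-up bigram counts:

```python
# Toy bigram counts; a real system would estimate these from a corpus.
BIGRAMS = {
    ("are", "going"): 120, ("our", "going"): 1,
    ("are", "house"): 2,   ("our", "house"): 95,
}
CONFUSION_SETS = [{"are", "our"}]  # sets of frequently confused real words

def check_word(word: str, next_word: str) -> str:
    """Pick the confusion-set member that fits the following word best."""
    for cset in CONFUSION_SETS:
        if word in cset:
            return max(cset, key=lambda w: BIGRAMS.get((w, next_word), 0))
    return word  # not in any confusion set: leave it alone

print(check_word("our", "going"))  # 'are'  (flags "our going" as a likely slip)
print(check_word("our", "house"))  # 'our'  (left as typed)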
Foreignish errors are not just errors made by foreigners; they include cases where incorrect usage by a foreigner in an influential position (like a textbook author) has influenced the language or its technical jargon. Closely related is when a prescriptive rule (taught by teachers of a language not their own) is overused in contexts where it gives the wrong result (because it doesn't reflect a true or native understanding of the language).