It might be an application of forward error correction, using the redundancy inherent in written English. Instead of interpreting a press of the 'F' key as the letter 'F' with 100% probability, one could interpret it as a 60% probability of 'F' and a 5% probability for each of 'E', 'D', 'C', 'V', 'B', 'G', 'T', 'R' (the surrounding letters on a QWERTY keyboard). Concatenate successive key presses to form codewords, then use FEC to find the dictionary word with the maximum likelihood.
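A minimal sketch of that decoding step, with an assumed toy confusion model (the 0.6/neighbour split above, a small partial neighbour table, and a tiny floor probability so no word scores exactly zero):

```python
import math

# Assumed toy model: pressing a key yields the intended letter with
# probability 0.6; each QWERTY neighbour splits the remaining 0.4.
# Only a few keys are listed here for illustration.
NEIGHBOURS = {
    'f': 'edcvbgtr',
    's': 'awedxz',
    't': 'rfgy',
    'c': 'xdfv',
}

def key_likelihood(pressed, letter):
    """P(intended letter | observed key press) under the toy model."""
    if letter == pressed:
        return 0.6
    nbrs = NEIGHBOURS.get(pressed, '')
    if letter in nbrs:
        return 0.4 / len(nbrs)
    return 1e-6  # floor so distant letters aren't impossible

def decode(presses, dictionary):
    """Return the same-length dictionary word with maximum likelihood."""
    best, best_lp = None, -math.inf
    for word in dictionary:
        if len(word) != len(presses):
            continue
        # Sum log-probabilities per position (independent-key assumption).
        lp = sum(math.log(key_likelihood(p, l))
                 for p, l in zip(presses, word))
        if lp > best_lp:
            best, best_lp = word, lp
    return best

words = ['cat', 'fat', 'car', 'far', 'tar']
print(decode('fst', words))  # 's' was likely a slip for neighbouring 'a'
```

The non-word press sequence "fst" decodes to "fat", since 'a' is a QWERTY neighbour of 's' and the other letters match exactly.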
Getting fancy, one could use the results to train the system, fine-tuning the probabilities as it goes. In this way, it doesn't matter where you type (or even the spatial relationship of the "keys"), as long as the location you use for each letter averages out to be reasonably consistent. In time the system will train itself so that English words come out, meaning the key mappings are probably correct.
For example, one could start typing "QWERTY" style, change to typing "Dvorak" and the system would put out gibberish for a while, then start putting out English again as the probabilities converged.
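A minimal sketch of that self-training loop, assuming decoded words are trusted as labels (the reinforcement weight and Laplace-smoothed counts are arbitrary choices for illustration):

```python
import string

ALPHABET = string.ascii_lowercase

# Laplace-smoothed counts: start every (key, letter) pair at 1 so no
# mapping is ever ruled out, then let evidence shift the distribution.
counts = {k: {l: 1 for l in ALPHABET} for k in ALPHABET}

def prob(pressed, letter):
    """Current estimate of P(intended letter | observed key)."""
    row = counts[pressed]
    return row[letter] / sum(row.values())

def update(presses, decoded_word):
    """Treat the decoded word as ground truth and reinforce its mapping."""
    for p, l in zip(presses, decoded_word):
        counts[p][l] += 5  # assumed learning-rate weight

# If the typist has switched layouts so that the physical 'f' position
# now produces 'u' words, repeated decodes pull P('u' | 'f') upward.
for _ in range(10):
    update('f', 'u')
print(prob('f', 'u'))  # now dominates the initially uniform 1/26
```

After a layout switch the decoder would emit gibberish until enough successfully decoded words shift these rows, which is exactly the QWERTY-to-Dvorak convergence described above.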