I'm not a big fan of MS articles, nor do I think the write-up for this one was sufficient, but the SF Gate article is good discussion fodder, both for what it contained and for what it left out.
I was quite surprised that Plotkin did not do his homework on the history of voice recognition. Mac users have been able to navigate by voice since System 7 or thereabouts. OS/2 users have had both voice navigation and dictation built in since OS/2 v4. I've seen rumors that Palm is aiming to build speech recognition into its next generation of handhelds, which might be built around the StrongARM chip instead of the Motorola DragonBall. Corel has been bundling Dragon Dictate with WordPerfect for quite a long time now. IBM has already ported ViaVoice to Linux; I don't know if it is out of beta yet, but I do know it is included in the SuSE retail package.
In any case, speech recognition seems to be something of a niche product. That is partly due to the level of inaccuracy in voice dictation. Even 99% accuracy still means roughly one error in every hundred words, and for most people, though not all, dictating and then going back to correct those mistakes will be more time consuming than simply typing. I also don't think that very many voice products acknowledge that more than one person may need to use them. Consider a family of five sharing a computer trained to listen to one voice.
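To make that time argument concrete, here's a quick back-of-the-envelope sketch in Python. Every number in it is an assumption picked purely for illustration (speaking speed, typing speed, and especially how long it takes to hunt down and fix each recognition error), not a measurement of any real product:

```python
# Rough dictation-vs-typing time comparison.
# All parameters are made-up illustrative figures, not benchmarks.

def dictation_minutes(words, speak_wpm=120, accuracy=0.99, fix_sec=60):
    """Time to dictate, plus time to find and fix each mis-recognized word."""
    errors = words * (1 - accuracy)  # ~1 error per 100 words at 99%
    return words / speak_wpm + errors * fix_sec / 60

def typing_minutes(words, type_wpm=70):
    """Time for a reasonably quick typist to just type the document."""
    return words / type_wpm

words = 1000
print(f"dictate + correct: {dictation_minutes(words):.1f} min")  # 18.3 min
print(f"type it yourself:  {typing_minutes(words):.1f} min")     # 14.3 min
```

With those assumed numbers the typist wins; bump the accuracy to 99.9% or shrink the per-error correction time and dictation pulls ahead, which is the whole argument in miniature.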
Another inherent problem with speech recognition is the windowing application paradigm. There is no quick, intuitive way to tell the computer to start at the second paragraph from the top of the screen, cut the dependent clause out of the fourth sentence, and paste it into the preceding paragraph. It takes a fraction of a second to do this with a mouse.
Lastly, while I do know that some people would have a much easier time if voice recognition were built into computers, I am a relatively quick typist, so there would be only marginal incentive for me to use such software. Now, what would be really hot is a thought interface, so that I could just think input into the machine. I don't know that I'd want a two-way street there, with the machine giving neural feedback, but I'd love to be able to just think code and watch it scroll by on my terminal.
The recent article on using lamprey brain stems to control robots gives me hope that such a neural interface might be functional in my lifetime.