Voice recognition in Apple's 4G phones
Apple has been advertising its new 4G phones on TV, with users asking questions like "Will I need an umbrella tonight?" and "Where is my brother?". Voice recognition (and, presumably, some clever software to work out the point of the question and the expected answers) is obviously being used.
Apple has over 50 patent documents involving voice recognition; see this list. I do not know whether they cover the software used in the actual phones.
For example, User profiling for voice input processing enables the phone to recognise the user's voice. It explains that voice input can be used in the dark, or by blind people. The user can preselect words as metadata, perhaps as keys for specific commands. Paragraph 0005 mentions that "the electronic device can require significant resources to parse complex instructions", and the point of the invention is to restrict the "library" of words that the device will try to interpret to those preselected by its owner, saving memory. The main drawing is given below.
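The restricted-library idea can be sketched in a few lines. This is my own hypothetical illustration of the principle, not Apple's implementation: the word list, command names, and matching logic are all invented.

```python
# Hypothetical sketch of the "restricted library" idea: rather than parsing
# arbitrary speech, the device matches only against words the owner has
# preselected, keeping the lookup table (and memory use) small.

# Words the owner has preselected, each mapped to a command (all invented here).
USER_LIBRARY = {
    "umbrella": "show_weather",
    "brother": "locate_contact",
    "music": "play_music",
}

def interpret(transcribed_words):
    """Return the commands triggered by any preselected words heard."""
    commands = []
    for word in transcribed_words:
        action = USER_LIBRARY.get(word.lower())
        if action:
            commands.append(action)
    return commands

print(interpret(["Will", "I", "need", "an", "umbrella", "tonight"]))
# → ['show_weather']
```

Anything outside the preselected library is simply ignored, which is the memory-saving trade-off the patent describes.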
A big problem must be working out the meaning of what is, to the software, simply noise. Apple's Semantic reconstruction patent application explains a method of carrying out linguistic analysis so that what is being asked for can be understood.
Many do not realise how many patent documents there are (I've been asked if I look at each newly published patent): in the related sector of the creation of a thesaurus of terms, there are over 130 results, and these are merely those coded as G06F17/30TGT in the ECLA classification, which omits patent documents published in the Far East. For those countries (unless, of course, the documents had also been published in the West), the broader G06F17/30 class would be used, with keywords to narrow down the search. And that means selecting words that have the right meaning, those used in linguistics.
Someone once said to me that patent searching wasn't rocket science. No, it's not -- it's often harder.