Contains types for implementing recognition of speech and Dual-Tone Multi-Frequency (DTMF) tones.
The Microsoft Speech Platform SDK 11 offers a basic speech recognition infrastructure that digitizes acoustic signals, recovers words and speech elements from audio input, and also processes DTMF tones.
Applications use the Microsoft.Speech.Recognition namespace to access and extend this basic speech recognition technology by defining algorithms for identifying and acting on specific phrases or word patterns, and by managing the runtime behavior of this speech infrastructure.
You create grammars, which consist of a set of rules or constraints, to define words and phrases that your application will recognize as meaningful input. Using a constructor for the Grammar class, you can create a grammar object at runtime from GrammarBuilder or SrgsDocument instances, or from a file, a string, or a stream that contains a definition of a grammar.
Using the GrammarBuilder and Choices classes, you can programmatically build grammars that do not depend on or conform to the Speech Recognition Grammar Specification 1.0 (SRGS). To create SRGS-compliant grammars programmatically, use the types and members of the Microsoft.Speech.Recognition.SrgsGrammar namespace.
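For example, the following sketch uses the Choices and GrammarBuilder classes to construct a simple command grammar. The phrase text and grammar name are illustrative, not part of the SDK:

```csharp
using Microsoft.Speech.Recognition;

// Build a grammar that recognizes phrases such as
// "turn the light on" and "turn the fan off".
Choices device = new Choices(new string[] { "light", "fan" });
Choices state = new Choices(new string[] { "on", "off" });

GrammarBuilder builder = new GrammarBuilder("turn the");
builder.Append(device);
builder.Append(state);

// Create the runtime Grammar object from the GrammarBuilder.
Grammar grammar = new Grammar(builder);
grammar.Name = "DeviceControl"; // hypothetical name for this example
```

A Grammar object built this way can then be loaded into a recognizer with LoadGrammar or LoadGrammarAsync.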
Manage Speech Recognition Engines
Instances of SpeechRecognitionEngine provide access to installed recognizers (Runtime Languages) to perform speech recognition. A Runtime Language includes the language model, acoustic model, and other data necessary to provision a speech engine to perform speech recognition in a particular language. See InstalledRecognizers() for more information.
You can use SpeechRecognitionEngine to initialize a speech recognition engine instance, select an installed Runtime Language to use for recognition, load and unload grammars, subscribe to speech recognition events, configure the audio input, start and stop recognition, and modify properties of the speech recognition engine that affect recognition.
See Initialize and Manage a Speech Recognition Engine (Microsoft.Speech) in the Microsoft Speech Programming Guide for more information.
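A minimal sketch of this workflow, assuming an en-US Runtime Language is installed and a default audio input device is available (the yes/no grammar is illustrative):

```csharp
using System.Globalization;
using Microsoft.Speech.Recognition;

// Select an installed en-US Runtime Language, load a grammar, and
// start asynchronous, continuous recognition from the default device.
using (SpeechRecognitionEngine recognizer =
    new SpeechRecognitionEngine(new CultureInfo("en-US")))
{
    Grammar grammar = new Grammar(new GrammarBuilder(new Choices("yes", "no")));
    recognizer.LoadGrammar(grammar);

    recognizer.SetInputToDefaultAudioDevice();
    recognizer.RecognizeAsync(RecognizeMode.Multiple);

    // ... keep the application running while recognition proceeds ...

    recognizer.RecognizeAsyncStop();
}
```

RecognizeMode.Multiple keeps the engine listening after each recognition; pass RecognizeMode.Single to stop after the first result.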
The DtmfRecognitionEngine class provides control over engines that recognize DTMF tones.
Respond to Events
SpeechRecognitionEngine objects generate events in response to audio input to the speech recognition engine. The AudioLevelUpdated, AudioSignalProblemOccurred, and AudioStateChanged events are raised in response to changes in the incoming signal. The SpeechDetected event is raised when the speech recognition engine identifies incoming audio as speech. The speech recognition engine raises the SpeechRecognized event when it matches speech input to one of its loaded grammars, and raises the SpeechRecognitionRejected event when speech input does not match any of its loaded grammars.
Other events include LoadGrammarCompleted, which a speech recognition engine raises when it has finished loading a grammar.
The DtmfRecognitionEngine generates a similar set of events, but in response to receiving DTMF tones, rather than speech, as input.
You can register to be notified of the events that the speech recognition engine raises, and create handlers that use the EventArgs class associated with each of these events to program your application's behavior when an event is raised.
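For example, the following sketch attaches handlers for several of these events; the console messages are illustrative:

```csharp
using System;
using Microsoft.Speech.Recognition;

SpeechRecognitionEngine recognizer = new SpeechRecognitionEngine();

// Raised when speech input matches a loaded grammar.
recognizer.SpeechRecognized += (object sender, SpeechRecognizedEventArgs e) =>
{
    Console.WriteLine("Recognized: {0} (confidence {1})",
        e.Result.Text, e.Result.Confidence);
};

// Raised when speech input does not match any loaded grammar.
recognizer.SpeechRecognitionRejected +=
    (object sender, SpeechRecognitionRejectedEventArgs e) =>
{
    Console.WriteLine("Speech input did not match any loaded grammar.");
};

// Raised when an asynchronous grammar load finishes.
recognizer.LoadGrammarCompleted +=
    (object sender, LoadGrammarCompletedEventArgs e) =>
{
    Console.WriteLine("Grammar loaded: {0}", e.Grammar.Name);
};
```

Each handler receives the EventArgs type listed in the tables below, which carries the data for that event.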
Classes

| Class | Description |
| --- | --- |
| AudioLevelUpdatedEventArgs | Provides data for the AudioLevelUpdated event. |
| AudioSignalProblemOccurredEventArgs | Provides data for the AudioSignalProblemOccurred event. |
| AudioStateChangedEventArgs | Provides data for the AudioStateChanged event of the SpeechRecognitionEngine class. |
| Choices | Represents a list of alternative items to make up an element in a grammar. |
| DtmfHypothesizedEventArgs | Provides data for the DtmfHypothesized event. |
| DtmfRecognitionEngine | Provides access to DTMF recognition services. |
| DtmfRecognitionRejectedEventArgs | Provides data for the DtmfRecognitionRejected event. |
| DtmfRecognizeCompletedEventArgs | Returns information from the RecognizeCompleted event. |
| DtmfRecognizedEventArgs | Provides data for the DtmfRecognized event. |
| EmulateRecognizeCompletedEventArgs | Provides data for the EmulateRecognizeCompleted event. |
| Grammar | A run-time object that references a speech recognition grammar, which an application can use to define the constraints for speech recognition. |
| GrammarBuilder | Provides a mechanism for programmatically building the constraints for a speech recognition grammar. |
| GrammarInfo | Represents an object that contains static information about a speech recognition grammar. In contrast, a Grammar object contains information about a speech recognition grammar that is loaded into a speech recognizer at run time. |
| GrammarInfoPartsCollection | Represents a collection of speech recognition engine parts. |
| InvalidCultureException | Thrown when a recognizer attempts to reference a grammar that specifies an unsupported or uninstalled language. |
| LoadGrammarCompletedEventArgs | Provides data for the LoadGrammarCompleted event. |
| RecognitionEventArgs | Provides information about speech recognition events. |
| RecognitionResult | Contains detailed information about input that was recognized by instances of SpeechRecognitionEngine or DtmfRecognitionEngine. |
| RecognizeCompletedEventArgs | Provides data for the RecognizeCompleted event. |
| RecognizedAudio | Represents audio input that is associated with a RecognitionResult. |
| RecognizedPhrase | Contains detailed information, generated by the speech recognition engine, about the recognized input. |
| RecognizedWordUnit | Provides the atomic unit of recognized speech. |
| RecognizerInfo | Represents information about a SpeechRecognitionEngine object. |
| RecognizerUpdateReachedEventArgs | Provides data for the RecognizerUpdateReached event. |
| ReplacementText | Contains information about a speech normalization procedure that has been performed on recognition results. |
| SemanticResultKey | Associates a key string with SemanticResultValue values to define the SemanticValue objects. |
| SemanticResultValue | Represents a semantic value and optionally associates the value with a component of a speech recognition grammar. |
| SemanticValue | Represents the semantic organization of a recognized phrase. |
| SpeechDetectedEventArgs | Provides data for the SpeechDetected event. |
| SpeechHypothesizedEventArgs | Infrastructure. Provides data for the SpeechHypothesized event. |
| SpeechRecognitionEngine | Provides the means to access and manage a speech recognition engine. |
| SpeechRecognitionRejectedEventArgs | Provides information for the SpeechRecognitionRejected event. |
| SpeechRecognizedEventArgs | Provides information for the Grammar.SpeechRecognized and SpeechRecognitionEngine.SpeechRecognized events. |
Enumerations

| Enumeration | Description |
| --- | --- |
| AudioSignalProblem | Contains a list of possible problems in the audio signal coming in to a speech recognition engine. |
| AudioState | Contains a list of possible states for the audio input to a speech recognition engine. |
| DisplayAttributes | Lists the options that the SpeechRecognitionEngine object can use to specify white space for the display of a word or punctuation mark. |
| DtmfTone | Enumerates values of the first 16 DTMF tones. |
| EmulateOptions | Specifies the type of recognition operation that is performed by the EmulateRecognize() and EmulateRecognizeAsync() methods that take this object as a parameter. |
| Pronounceable | Lists values that represent whether the SpeechRecognitionEngine can pronounce a word and whether the pronunciation is defined in a lexicon. |
| RecognizeMode | Enumerates values of the recognition mode. |
| SubsetMatchingMode | Enumerates values of subset matching mode. |