SpeechRecognizer Class
Definition
Some information relates to pre-released product which may be substantially modified before it’s commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Prerelease APIs are identified by a Prerelease label.
[Contains prerelease APIs.]
Enables speech recognition with either a default or a custom graphical user interface (GUI).
C++: public : sealed class SpeechRecognizer : IClosable, ISpeechRecognizer, ISpeechRecognizer2
C#: public sealed class SpeechRecognizer : IDisposable, ISpeechRecognizer, ISpeechRecognizer2
VB: Public NotInheritable Class SpeechRecognizer Implements IDisposable, ISpeechRecognizer, ISpeechRecognizer2
JavaScript: You can use this class in JavaScript.
- Attributes
Device family: Windows 10 (introduced v10.0.10240.0)
API contract: Windows.Foundation.UniversalApiContract (introduced v1)
Remarks
CompileConstraintsAsync must always be called before RecognizeAsync or RecognizeWithUIAsync, even if no constraints are specified in the Constraints property.
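Example
The following minimal C# sketch shows the required order of calls (the RecognizeOnceAsync wrapper method is illustrative and assumes a UWP app with the Microphone capability):
using System;
using System.Threading.Tasks;
using Windows.Media.SpeechRecognition;

private async Task RecognizeOnceAsync()
{
    using (var recognizer = new SpeechRecognizer())
    {
        // Compile first, even though no constraints were added;
        // the predefined dictation grammar then applies.
        SpeechRecognitionCompilationResult compilation = await recognizer.CompileConstraintsAsync();
        if (compilation.Status == SpeechRecognitionResultStatus.Success)
        {
            SpeechRecognitionResult result = await recognizer.RecognizeAsync();
            // result.Text contains the recognized phrase.
        }
    }
}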
Constructors
SpeechRecognizer()
Creates a new instance of the SpeechRecognizer class.
C++: public : SpeechRecognizer()
C#: public SpeechRecognizer()
VB: Public Sub New()
JavaScript: You can use this method in JavaScript.
- See Also
SpeechRecognizer(Language)
Creates a new instance of the SpeechRecognizer class with a language specifier.
C++: public : SpeechRecognizer(Language language)
C#: public SpeechRecognizer(Language language)
VB: Public Sub New(language As Language)
JavaScript: You can use this method in JavaScript.
Remarks
CurrentLanguage is set to the value of language.
Error codes
SPERR_WINRT_UNSUPPORTED_LANG (0x800455bc)
Thrown if the specified language is not supported.
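Example
As a sketch, constructing a recognizer for a specific language (assumes the de-DE speech language is installed on the device):
using Windows.Globalization;
using Windows.Media.SpeechRecognition;

// Recognize German speech; fails if the de-DE speech language is not installed.
var german = new Language("de-DE");
var recognizer = new SpeechRecognizer(german);
// recognizer.CurrentLanguage now reflects the de-DE Language object.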
Properties
Constraints
Gets the collection of constraint objects that are associated with the SpeechRecognizer object.
C++: public : IVector<ISpeechRecognitionConstraint> Constraints { get; }
C#: public IList<ISpeechRecognitionConstraint> Constraints { get; }
VB: Public ReadOnly Property Constraints As IList(Of ISpeechRecognitionConstraint)
JavaScript: You can use this property in JavaScript.
- Value
IVector<ISpeechRecognitionConstraint> (C++) / IList<ISpeechRecognitionConstraint> (C#, VB)
The ISpeechRecognitionConstraint objects associated with the speech recognizer. Valid constraint objects include:
- SpeechRecognitionGrammarFileConstraint
- SpeechRecognitionListConstraint
- SpeechRecognitionTopicConstraint
- SpeechRecognitionVoiceCommandDefinitionConstraint
Only SpeechRecognitionListConstraint and SpeechRecognitionGrammarFileConstraint objects can be mixed in the same collection.
Remarks
To use web-service constraints, speech input and dictation support must be enabled in Settings by turning on the "Get to know me" option in the Settings -> Privacy -> Speech, inking, and typing page. See "Recognize speech input" in Speech recognition.
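Example
The following sketch adds a programmatic list constraint to the collection (the tag string "yesNoTag" and the wrapper method are illustrative):
using System.Threading.Tasks;
using Windows.Media.SpeechRecognition;

private async Task RecognizeYesOrNoAsync(SpeechRecognizer recognizer)
{
    // A list constraint and a grammar-file constraint could be added side by side;
    // topic (web-service) constraints cannot be mixed with them.
    var listConstraint = new SpeechRecognitionListConstraint(
        new[] { "yes", "no" }, "yesNoTag");
    recognizer.Constraints.Add(listConstraint);

    await recognizer.CompileConstraintsAsync();
    SpeechRecognitionResult result = await recognizer.RecognizeAsync();
    // result.Constraint.Tag identifies which constraint matched ("yesNoTag" here).
}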
- See Also
ContinuousRecognitionSession
Gets the continuous recognition (dictation) session object (SpeechContinuousRecognitionSession) associated with this SpeechRecognizer.
C++: public : SpeechContinuousRecognitionSession ContinuousRecognitionSession { get; }
C#: public SpeechContinuousRecognitionSession ContinuousRecognitionSession { get; }
VB: Public ReadOnly Property ContinuousRecognitionSession As SpeechContinuousRecognitionSession
JavaScript: You can use this property in JavaScript.
- Value
SpeechContinuousRecognitionSession
The continuous recognition session object associated with this SpeechRecognizer.
Remarks
To use web-service constraints, speech input and dictation support must be enabled in Settings by turning on the "Get to know me" option in the Settings -> Privacy -> Speech, inking, and typing page. See "Recognize speech input" in Speech recognition.
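Example
A sketch of continuous dictation using this property (handler wiring only; error handling and UI thread marshaling are omitted, and the wrapper method name is illustrative):
using System;
using System.Threading.Tasks;
using Windows.Media.SpeechRecognition;

private async Task StartDictationAsync(SpeechRecognizer recognizer)
{
    await recognizer.CompileConstraintsAsync();

    recognizer.ContinuousRecognitionSession.ResultGenerated += (session, args) =>
    {
        // args.Result.Text is the recognized phrase for this segment of speech.
        System.Diagnostics.Debug.WriteLine(args.Result.Text);
    };

    await recognizer.ContinuousRecognitionSession.StartAsync();
    // Later, end the session with:
    // await recognizer.ContinuousRecognitionSession.StopAsync();
}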
- See Also
CurrentLanguage
Gets the language used for speech recognition.
C++: public : Language CurrentLanguage { get; }
C#: public Language CurrentLanguage { get; }
VB: Public ReadOnly Property CurrentLanguage As Language
JavaScript: You can use this property in JavaScript.
Remarks
CurrentLanguage is initialized with the value specified in the SpeechRecognizer(language) constructor. If no language is specified in the SpeechRecognizer() constructor, CurrentLanguage is initialized with the value of SystemSpeechLanguage.
State
Gets the state of the speech recognizer.
C++: public : SpeechRecognizerState State { get; }
C#: public SpeechRecognizerState State { get; }
VB: Public ReadOnly Property State As SpeechRecognizerState
JavaScript: You can use this property in JavaScript.
The speech recognizer state.
- See Also
SupportedGrammarLanguages
Gets the collection of languages supported by the custom grammars of the SpeechRecognitionGrammarFileConstraint and SpeechRecognitionListConstraint objects specified in the Constraints property.
C++: public : static IVectorView<Language> SupportedGrammarLanguages { get; }
C#: public static IReadOnlyList<Language> SupportedGrammarLanguages { get; }
VB: Public Shared ReadOnly Property SupportedGrammarLanguages As IReadOnlyList(Of Language)
JavaScript: You can use this property in JavaScript.
- Value
IVectorView<Language> (C++) / IReadOnlyList<Language> (C#, VB)
The collection of grammar languages.
Remarks
Constraints, or grammars, define the spoken words and phrases that can be matched by the speech recognizer. You can specify one of the pre-defined, web-service grammars or you can create a custom grammar, described here, that is installed with your app (speech recognition using a custom constraint is performed on the device).
- Programmatic list constraints provide a lightweight approach to creating simple grammars using a list of words or phrases. A list constraint works well for recognizing short, distinct phrases. Explicitly specifying all words in a grammar also improves recognition accuracy, as the speech recognition engine must only process speech to confirm a match. The list can also be programmatically updated.
- A Speech Recognition Grammar Specification (SRGS) grammar is a static document that, unlike a programmatic list constraint, uses the XML format defined by the SRGS Version 1.0 specification. An SRGS grammar provides the greatest control over the speech recognition experience by letting you capture multiple semantic meanings in a single recognition.
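Example
As a sketch, checking whether a given language can be used with custom (on-device) grammars before creating a recognizer for it:
using System.Linq;
using Windows.Globalization;
using Windows.Media.SpeechRecognition;

var candidate = new Language("fr-FR");
bool supported = SpeechRecognizer.SupportedGrammarLanguages
    .Any(lang => lang.LanguageTag == candidate.LanguageTag);
// Only construct a SpeechRecognizer with 'candidate' if 'supported' is true.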
SupportedTopicLanguages
Gets the collection of languages supported by the pre-defined, web-service grammars of the SpeechRecognitionTopicConstraint objects specified in the Constraints property.
C++: public : static IVectorView<Language> SupportedTopicLanguages { get; }
C#: public static IReadOnlyList<Language> SupportedTopicLanguages { get; }
VB: Public Shared ReadOnly Property SupportedTopicLanguages As IReadOnlyList(Of Language)
JavaScript: You can use this property in JavaScript.
- Value
IVectorView<Language> (C++) / IReadOnlyList<Language> (C#, VB)
The collection of languages supported by the pre-defined, web-service grammars.
Remarks
Constraints, or grammars, define the spoken words and phrases that can be matched by the speech recognizer. You can specify one of the pre-defined, web-service grammars or you can create a custom grammar that is installed with your app.
Predefined dictation and web-search grammars provide speech recognition for your app without requiring you to author a grammar. When using these grammars, speech recognition is performed by a remote web service and the results are returned to the device.
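Example
The following sketch adds the predefined web-search topic grammar (the tag "webSearch" and the wrapper method are illustrative; web-service recognition requires that speech input be enabled in Settings):
using System.Threading.Tasks;
using Windows.Media.SpeechRecognition;

private async Task RecognizeWebSearchAsync(SpeechRecognizer recognizer)
{
    var topicConstraint = new SpeechRecognitionTopicConstraint(
        SpeechRecognitionScenario.WebSearch, "webSearch");
    recognizer.Constraints.Add(topicConstraint);

    await recognizer.CompileConstraintsAsync();
    // Recognition is performed by the remote web service.
    SpeechRecognitionResult result = await recognizer.RecognizeAsync();
}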
SystemSpeechLanguage
Gets the speech language of the device specified in Settings > Time & Language > Speech.
C++: public : static Language SystemSpeechLanguage { get; }
C#: public static Language SystemSpeechLanguage { get; }
VB: Public Shared ReadOnly Property SystemSpeechLanguage As Language
JavaScript: You can use this property in JavaScript.
The speech language of the device, or null if a speech language is not installed.
Remarks
Speech languages are added from the Settings > Time & Language > Region & language screen.
- Click Add a language.
- Select a language from the Add a language screen.
- Depending on the language selected, a language region screen might be displayed. Select the region.
- From the Settings > Time & Language > Region & language screen, select the language and click Options.
- If a speech language is available for the selected language and region, a Download button is displayed on the Language options screen. Click this button to download and install the speech language.
If no language is specified in the SpeechRecognizer() constructor, CurrentLanguage is initialized with the value of SystemSpeechLanguage.
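Example
A short sketch that guards against a missing speech language before creating a recognizer:
using Windows.Globalization;
using Windows.Media.SpeechRecognition;

Language systemSpeechLanguage = SpeechRecognizer.SystemSpeechLanguage;
if (systemSpeechLanguage != null)
{
    // Safe to use the default constructor; it picks up this language.
    var recognizer = new SpeechRecognizer();
}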
Timeouts
Gets how long a speech recognizer ignores silence or unrecognizable sounds (babble) and continues listening for speech input.
C++: public : SpeechRecognizerTimeouts Timeouts { get; }
C#: public SpeechRecognizerTimeouts Timeouts { get; }
VB: Public ReadOnly Property Timeouts As SpeechRecognizerTimeouts
JavaScript: You can use this property in JavaScript.
- Value
SpeechRecognizerTimeouts
The timeout settings.
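Example
The following sketch adjusts the timeouts before starting recognition (the values are illustrative):
using System;
using Windows.Media.SpeechRecognition;

var recognizer = new SpeechRecognizer();
// Wait up to 6 seconds for the user to start speaking.
recognizer.Timeouts.InitialSilenceTimeout = TimeSpan.FromSeconds(6);
// Give up after 4 seconds of unrecognizable sound.
recognizer.Timeouts.BabbleTimeout = TimeSpan.FromSeconds(4);
// Treat 1.2 seconds of silence after speech as the end of input.
recognizer.Timeouts.EndSilenceTimeout = TimeSpan.FromSeconds(1.2);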
- See Also
UIOptions
Gets the UI settings for the RecognizeWithUIAsync method.
C++: public : SpeechRecognizerUIOptions UIOptions { get; }
C#: public SpeechRecognizerUIOptions UIOptions { get; }
VB: Public ReadOnly Property UIOptions As SpeechRecognizerUIOptions
JavaScript: You can use this property in JavaScript.
- Value
SpeechRecognizerUIOptions
The UI settings.
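Example
A sketch that customizes the built-in recognition UI before calling RecognizeWithUIAsync (the prompt strings are placeholders):
using Windows.Media.SpeechRecognition;

var recognizer = new SpeechRecognizer();
recognizer.UIOptions.AudiblePrompt = "Say what you want to search for...";
recognizer.UIOptions.ExampleText = "For example, 'weather for today'";
recognizer.UIOptions.ShowConfirmation = true;
recognizer.UIOptions.IsReadBackEnabled = true;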
- See Also
Methods
Close()
Disposes the speech recognizer by freeing, releasing, or resetting allocated resources.
C++: public : void Close()
C#: This member is not implemented in C#.
VB: This member is not implemented in VB.NET.
JavaScript: You can use this method in JavaScript.
An exception is thrown if either RecognizeAsync or RecognizeWithUIAsync is in progress when Close is called.
Remarks
If a SpeechContinuousRecognitionSession is underway, Close is functionally equivalent to calling CancelAsync.
- See Also
CompileConstraintsAsync()
Asynchronously compile all constraints specified by the Constraints property.
C++: public : IAsyncOperation<SpeechRecognitionCompilationResult> CompileConstraintsAsync()
C#: public IAsyncOperation<SpeechRecognitionCompilationResult> CompileConstraintsAsync()
VB: Public Function CompileConstraintsAsync() As IAsyncOperation(Of SpeechRecognitionCompilationResult)
JavaScript: You can use this method in JavaScript.
The result of the constraints compilation as a SpeechRecognitionCompilationResult object.
Remarks
CompileConstraintsAsync must always be called before RecognizeAsync or RecognizeWithUIAsync, even if no constraints are specified in the Constraints property.
This method returns an error if:
- SpeechRecognizerState is not Idle or Paused.
- One or more constraints are specified when the recognition session is initialized, recognition is Paused, all constraints are removed, and recognition is resumed.
- No constraints are specified when the recognition session is initialized, recognition is Paused, constraints are added, and recognition is resumed.
To use web-service constraints, speech input and dictation support must be enabled in Settings by turning on the "Get to know me" option in the Settings -> Privacy -> Speech, inking, and typing page. See "Recognize speech input" in Speech recognition.
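Example
The following sketch checks the compilation result before starting recognition (the TryCompileAsync wrapper is illustrative):
using System.Threading.Tasks;
using Windows.Media.SpeechRecognition;

private async Task<bool> TryCompileAsync(SpeechRecognizer recognizer)
{
    SpeechRecognitionCompilationResult compilation = await recognizer.CompileConstraintsAsync();
    // Status is Success when all constraints compiled; other values
    // (for example, TopicLanguageNotSupported) describe the failure.
    return compilation.Status == SpeechRecognitionResultStatus.Success;
}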
- See Also
Dispose()
Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.
C++: This member is not implemented in C++.
C#: void Dispose()
VB: Sub Dispose()
RecognizeAsync()
Begins a speech recognition session for a SpeechRecognizer object.
C++: public : IAsyncOperation<SpeechRecognitionResult> RecognizeAsync()
C#: public IAsyncOperation<SpeechRecognitionResult> RecognizeAsync()
VB: Public Function RecognizeAsync() As IAsyncOperation(Of SpeechRecognitionResult)
JavaScript: You can use this method in JavaScript.
The result of the speech recognition session that was initiated by the SpeechRecognizer object.
- See Also
RecognizeWithUIAsync()
Asynchronously starts a speech recognition session that includes additional UI mechanisms, including prompts, examples, text-to-speech (TTS), and confirmations.
C++: public : IAsyncOperation<SpeechRecognitionResult> RecognizeWithUIAsync()
C#: public IAsyncOperation<SpeechRecognitionResult> RecognizeWithUIAsync()
VB: Public Function RecognizeWithUIAsync() As IAsyncOperation(Of SpeechRecognitionResult)
JavaScript: You can use this method in JavaScript.
The result of the speech recognition session as a SpeechRecognitionResult object.
Remarks
The UI mechanisms supported by RecognizeWithUIAsync are specified by the UIOptions property.
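Example
A sketch that combines UIOptions with the UI-driven recognition flow (the wrapper method and prompt string are illustrative):
using System.Threading.Tasks;
using Windows.Media.SpeechRecognition;

private async Task RecognizeWithPromptAsync(SpeechRecognizer recognizer)
{
    recognizer.UIOptions.AudiblePrompt = "Say a color...";
    await recognizer.CompileConstraintsAsync();

    SpeechRecognitionResult result = await recognizer.RecognizeWithUIAsync();
    if (result.Status == SpeechRecognitionResultStatus.Success)
    {
        // result.Text and result.Confidence describe what was heard.
    }
}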
- See Also
StopRecognitionAsync()
Asynchronously ends the speech recognition session.
C++: public : IAsyncAction StopRecognitionAsync()
C#: public IAsyncAction StopRecognitionAsync()
VB: Public Function StopRecognitionAsync() As IAsyncAction
JavaScript: You can use this method in JavaScript.
No object or value is returned when this method completes.
- See Also
TrySetSystemSpeechLanguageAsync(Language)
Prerelease. Asynchronously attempts to set the system language used for speech recognition on an IoT device.
Note
This method is available only in Embedded mode.
C++: public : static IAsyncOperation<Platform::Boolean> TrySetSystemSpeechLanguageAsync(Language speechLanguage)
C#: public static IAsyncOperation<bool> TrySetSystemSpeechLanguageAsync(Language speechLanguage)
VB: Public Shared Function TrySetSystemSpeechLanguageAsync(speechLanguage As Language) As IAsyncOperation(Of Boolean)
JavaScript: You can use this method in JavaScript.
speechLanguage: The BCP-47-based system language used for speech recognition.
An asynchronous operation that returns true if the set operation was a success. Otherwise, returns false.
Device family: Windows 10 Insider Preview (introduced v10.0.16257.0)
API contract: Windows.Foundation.UniversalApiContract (introduced v5)
Remarks
Your app must declare the systemManagement capability, which lets apps access basic system administration privileges including locale, timezone, shut down, and reboot.
The systemManagement capability must include the iot namespace when you declare it in your app's package manifest.
<Capabilities>
  <iot:Capability Name="systemManagement"/>
</Capabilities>
Use SystemSpeechLanguage to get the current system speech recognition language.
Use Windows.Globalization.Language.IsWellFormed to validate speechLanguage.
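Example
A sketch for IoT (Embedded mode) devices; it requires the iot:systemManagement capability shown above, and the SetSpeechLanguageAsync wrapper is illustrative:
using System.Threading.Tasks;
using Windows.Globalization;
using Windows.Media.SpeechRecognition;

private async Task<bool> SetSpeechLanguageAsync(string bcp47Tag)
{
    // Validate the tag before attempting to change the system setting.
    if (!Language.IsWellFormed(bcp47Tag))
    {
        return false;
    }

    return await SpeechRecognizer.TrySetSystemSpeechLanguageAsync(new Language(bcp47Tag));
}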
Events
HypothesisGenerated
Occurs during an ongoing dictation session when a recognition result fragment is returned by the speech recognizer.
C++: public : event TypedEventHandler<SpeechRecognizer, SpeechRecognitionHypothesisGeneratedEventArgs> HypothesisGenerated
C#: public event TypedEventHandler<SpeechRecognizer, SpeechRecognitionHypothesisGeneratedEventArgs> HypothesisGenerated
VB: Public Event HypothesisGenerated As TypedEventHandler(Of SpeechRecognizer, SpeechRecognitionHypothesisGeneratedEventArgs)
JavaScript: You can use this event in JavaScript.
Remarks
The result fragment is useful for demonstrating that speech recognition is processing input during a lengthy dictation session.
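Example
The following sketch surfaces partial results while dictation is in progress:
using Windows.Media.SpeechRecognition;

var recognizer = new SpeechRecognizer();
recognizer.HypothesisGenerated += (sender, args) =>
{
    // args.Hypothesis.Text is the partial (not yet final) recognition text.
    System.Diagnostics.Debug.WriteLine(args.Hypothesis.Text);
};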
- See Also
RecognitionQualityDegrading
This event is raised when an audio problem is detected that might affect recognition accuracy.
C++: public : event TypedEventHandler<SpeechRecognizer, SpeechRecognitionQualityDegradingEventArgs> RecognitionQualityDegrading
C#: public event TypedEventHandler<SpeechRecognizer, SpeechRecognitionQualityDegradingEventArgs> RecognitionQualityDegrading
VB: Public Event RecognitionQualityDegrading As TypedEventHandler(Of SpeechRecognizer, SpeechRecognitionQualityDegradingEventArgs)
JavaScript: You can use this event in JavaScript.
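Example
A sketch of a handler that inspects the reported audio problem:
using Windows.Media.SpeechRecognition;

var recognizer = new SpeechRecognizer();
recognizer.RecognitionQualityDegrading += (sender, args) =>
{
    // args.Problem is a SpeechRecognitionAudioProblem value such as
    // TooNoisy, TooQuiet, or NoSignal; use it to prompt the user.
    System.Diagnostics.Debug.WriteLine(args.Problem);
};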
- See Also
StateChanged
This event is raised when a change occurs to the State property during audio capture.
C++: public : event TypedEventHandler<SpeechRecognizer, SpeechRecognizerStateChangedEventArgs> StateChanged
C#: public event TypedEventHandler<SpeechRecognizer, SpeechRecognizerStateChangedEventArgs> StateChanged
VB: Public Event StateChanged As TypedEventHandler(Of SpeechRecognizer, SpeechRecognizerStateChangedEventArgs)
JavaScript: You can use this event in JavaScript.
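Example
A sketch that tracks recognizer state transitions:
using Windows.Media.SpeechRecognition;

var recognizer = new SpeechRecognizer();
recognizer.StateChanged += (sender, args) =>
{
    // args.State is a SpeechRecognizerState value, for example Idle,
    // Capturing, Processing, or SoundStarted.
    System.Diagnostics.Debug.WriteLine(args.State);
};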
- See Also