SpeechRecognizer Class

Definition

Enables speech recognition with either a default or a custom graphical user interface (GUI).

public : sealed class SpeechRecognizer : IClosable, ISpeechRecognizer, ISpeechRecognizer2
public sealed class SpeechRecognizer : IDisposable, ISpeechRecognizer, ISpeechRecognizer2
Public NotInheritable Class SpeechRecognizer Implements IDisposable, ISpeechRecognizer, ISpeechRecognizer2
var speechRecognizer = new Windows.Media.SpeechRecognition.SpeechRecognizer();
Attributes
Windows 10 requirements
Device family
Windows 10 (introduced v10.0.10240.0)
API contract
Windows.Foundation.UniversalApiContract (introduced v1)

Remarks

CompileConstraintsAsync must always be called before RecognizeAsync or RecognizeWithUIAsync, even if no constraints are specified in the Constraints property.
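The required ordering can be sketched as follows. `startRecognition` is a hypothetical helper, not part of the API; it accepts any object exposing the SpeechRecognizer surface (compileConstraintsAsync, recognizeAsync), and the status check is illustrative — the real API reports a SpeechRecognitionResultStatus enumeration value on the compilation result.

```javascript
// Hypothetical helper: compile constraints before recognizing, as the
// remark above requires, even when the Constraints collection is empty.
async function startRecognition(recognizer) {
  // CompileConstraintsAsync must complete before RecognizeAsync is valid.
  const compilation = await recognizer.compileConstraintsAsync();
  if (compilation.status !== "success") {
    // Illustrative check; the WinRT API uses SpeechRecognitionResultStatus.
    throw new Error("Constraint compilation failed: " + compilation.status);
  }
  // Only now is it valid to start a recognition session.
  return recognizer.recognizeAsync();
}
```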

Constructors

SpeechRecognizer()

Creates a new instance of the SpeechRecognizer class.

public : SpeechRecognizer()
public SpeechRecognizer()
Public Sub New()
var speechRecognizer = new Windows.Media.SpeechRecognition.SpeechRecognizer();
SpeechRecognizer(Language)

Creates a new instance of the SpeechRecognizer class with a language specifier.

public : SpeechRecognizer(Language language)
public SpeechRecognizer(Language language)
Public Sub New(language As Language)
var speechRecognizer = new Windows.Media.SpeechRecognition.SpeechRecognizer(language);
Parameters
language
Language

The spoken language to use for recognition.

Remarks

CurrentLanguage is set to the value of language.

Error codes

SPERR_WINRT_UNSUPPORTED_LANG (0x800455bc)

Thrown if the specified language is not supported.

Properties

Constraints

Gets the collection of constraint objects that are associated with the SpeechRecognizer object.

public : IVector<ISpeechRecognitionConstraint> Constraints { get; }
public IList<ISpeechRecognitionConstraint> Constraints { get; }
Public ReadOnly Property Constraints As IList(Of ISpeechRecognitionConstraint)
var iList = speechRecognizer.constraints;
Value
IVector<ISpeechRecognitionConstraint> / IList<ISpeechRecognitionConstraint>

The ISpeechRecognitionConstraint objects associated with the speech recognizer. Valid constraint objects include:

  • SpeechRecognitionTopicConstraint
  • SpeechRecognitionListConstraint
  • SpeechRecognitionGrammarFileConstraint

Only SpeechRecognitionListConstraint and SpeechRecognitionGrammarFileConstraint objects can be mixed in the same collection.

Remarks

To use web-service constraints, speech input and dictation support must be enabled in Settings by turning on the "Get to know me" option in the Settings -> Privacy -> Speech, inking, and typing page. See "Recognize speech input" in Speech recognition.
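As a sketch of how the collection is typically populated, the hypothetical helper below (not part of the API) adds a list constraint built from a phrase list and then recompiles. `ListConstraint` stands in for Windows.Media.SpeechRecognition.SpeechRecognitionListConstraint, and the phrase list is illustrative; the WinRT vector projects into JavaScript with array-like behavior (the underlying WinRT method is Append).

```javascript
// Hypothetical helper: add a list constraint built from `phrases` to the
// recognizer's Constraints collection, then recompile before the next
// recognition session.
async function addPhraseList(recognizer, ListConstraint, phrases, tag) {
  const constraint = new ListConstraint(phrases, tag);
  recognizer.constraints.push(constraint); // array-like vector projection
  // Changed constraints must be recompiled before recognizing again.
  return recognizer.compileConstraintsAsync();
}
```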

ContinuousRecognitionSession

Gets the continuous recognition (dictation) session object (SpeechContinuousRecognitionSession) associated with this SpeechRecognizer.

public : SpeechContinuousRecognitionSession ContinuousRecognitionSession { get; }
public SpeechContinuousRecognitionSession ContinuousRecognitionSession { get; }
Public ReadOnly Property ContinuousRecognitionSession As SpeechContinuousRecognitionSession
var speechContinuousRecognitionSession = speechRecognizer.continuousRecognitionSession;

Remarks

To use web-service constraints, speech input and dictation support must be enabled in Settings by turning on the "Get to know me" option in the Settings -> Privacy -> Speech, inking, and typing page. See "Recognize speech input" in Speech recognition.
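A minimal dictation loop might be wired up as in the hypothetical helper below (not part of the API). It assumes the session exposes startAsync() and a resultGenerated event whose arguments carry a result.text, following the event pattern shown elsewhere on this page; `onText` is any callback that receives each recognized phrase.

```javascript
// Hypothetical sketch of starting a continuous dictation session.
async function startDictation(recognizer, onText) {
  const session = recognizer.continuousRecognitionSession;
  session.addEventListener("resultGenerated", (args) => {
    // Each completed phrase arrives through the event arguments.
    onText(args.result.text);
  });
  await recognizer.compileConstraintsAsync(); // always required first
  return session.startAsync();
}
```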

CurrentLanguage

Gets the language used for speech recognition.

public : Language CurrentLanguage { get; }
public Language CurrentLanguage { get; }
Public ReadOnly Property CurrentLanguage As Language
var language = speechRecognizer.currentLanguage;
Value
Language

The language used for speech recognition.

Remarks

CurrentLanguage is initialized with the value specified in the SpeechRecognizer(language) constructor. If no language is specified in the SpeechRecognizer() constructor, CurrentLanguage is initialized with the value of SystemSpeechLanguage.

State

Gets the state of the speech recognizer.

public : SpeechRecognizerState State { get; }
public SpeechRecognizerState State { get; }
Public ReadOnly Property State As SpeechRecognizerState
var speechRecognizerState = speechRecognizer.state;
SupportedGrammarLanguages

Gets the collection of languages supported by the custom grammars of the SpeechRecognitionGrammarFileConstraint and SpeechRecognitionListConstraint objects specified in the Constraints property.

public : static IVectorView<Language> SupportedGrammarLanguages { get; }
public static IReadOnlyList<Language> SupportedGrammarLanguages { get; }
Public Shared ReadOnly Property SupportedGrammarLanguages As IReadOnlyList(Of Language)
var iReadOnlyList = Windows.Media.SpeechRecognition.SpeechRecognizer.supportedGrammarLanguages;
Value
IVectorView<Language> / IReadOnlyList<Language>

The collection of grammar languages.

Remarks

Constraints, or grammars, define the spoken words and phrases that can be matched by the speech recognizer. You can specify one of the pre-defined, web-service grammars or you can create a custom grammar, described here, that is installed with your app (speech recognition using a custom constraint is performed on the device).

  • Programmatic list constraints provide a lightweight approach to creating simple grammars from a list of words or phrases. A list constraint works well for recognizing short, distinct phrases. Explicitly specifying all words in a grammar also improves recognition accuracy, because the speech recognition engine must only process speech to confirm a match. The list can also be updated programmatically.
  • A Speech Recognition Grammar Specification (SRGS) grammar is a static document that, unlike a programmatic list constraint, uses the XML format defined by the SRGS Version 1.0 specification. An SRGS grammar provides the greatest control over the speech recognition experience by letting you capture multiple semantic meanings in a single recognition.
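For reference, a minimal SRGS 1.0 grammar looks like the following; the rule name and phrases are illustrative, while the namespace and attributes follow the SRGS specification.

```xml
<?xml version="1.0" encoding="utf-8"?>
<grammar version="1.0" xml:lang="en-US" root="colors"
         xmlns="http://www.w3.org/2001/06/grammar">
  <!-- The root rule matches exactly one of the listed phrases. -->
  <rule id="colors">
    <one-of>
      <item>red</item>
      <item>green</item>
      <item>blue</item>
    </one-of>
  </rule>
</grammar>
```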
SupportedTopicLanguages

Gets the collection of languages supported by the pre-defined, web-service grammars of the SpeechRecognitionTopicConstraint objects specified in the Constraints property.

public : static IVectorView<Language> SupportedTopicLanguages { get; }
public static IReadOnlyList<Language> SupportedTopicLanguages { get; }
Public Shared ReadOnly Property SupportedTopicLanguages As IReadOnlyList(Of Language)
var iReadOnlyList = Windows.Media.SpeechRecognition.SpeechRecognizer.supportedTopicLanguages;
Value
IVectorView<Language> / IReadOnlyList<Language>

The collection of languages supported by the pre-defined, web-service grammars.

Remarks

Constraints, or grammars, define the spoken words and phrases that can be matched by the speech recognizer. You can specify one of the pre-defined, web-service grammars or you can create a custom grammar that is installed with your app.

Predefined dictation and web-search grammars provide speech recognition for your app without requiring you to author a grammar. When using these grammars, speech recognition is performed by a remote web service and the results are returned to the device.

SystemSpeechLanguage

Gets the speech language of the device specified in Settings > Time & Language > Speech.

public : static Language SystemSpeechLanguage { get; }
public static Language SystemSpeechLanguage { get; }
Public Shared ReadOnly Property SystemSpeechLanguage As Language
var language = Windows.Media.SpeechRecognition.SpeechRecognizer.systemSpeechLanguage;
Value
Language

The speech language of the device, or null if a speech language is not installed.

Remarks

Speech languages are added from the Settings > Time & Language > Region & language screen.

  1. Click Add a language.
  2. Select a language from the Add a language screen.
  3. Depending on the language selected, a language region screen might be displayed. Select the region.
  4. From the Settings > Time & Language > Region & language screen, select the language and click Options.
  5. If a speech language is available for the selected language and region, a Download button is displayed on the Language options screen. Click this button to download and install the speech language.

If no language is specified in the SpeechRecognizer() constructor, CurrentLanguage is initialized with the value of SystemSpeechLanguage.

Timeouts

Gets how long a speech recognizer ignores silence or unrecognizable sounds (babble) and continues listening for speech input.

public : SpeechRecognizerTimeouts Timeouts { get; }
public SpeechRecognizerTimeouts Timeouts { get; }
Public ReadOnly Property Timeouts As SpeechRecognizerTimeouts
var speechRecognizerTimeouts = speechRecognizer.timeouts;
UIOptions

Gets the UI settings for the RecognizeWithUIAsync method.

public : SpeechRecognizerUIOptions UIOptions { get; }
public SpeechRecognizerUIOptions UIOptions { get; }
Public ReadOnly Property UIOptions As SpeechRecognizerUIOptions
var speechRecognizerUIOptions = speechRecognizer.uiOptions;
Methods

Close()

Disposes the speech recognizer by freeing, releasing, or resetting allocated resources.

public : void Close()
This member is not implemented in C#
This member is not implemented in VB.Net
speechRecognizer.close();
Exceptions
ObjectDisposedException

Thrown if either RecognizeAsync or RecognizeWithUIAsync is in progress.

Remarks

If a SpeechContinuousRecognitionSession is underway, Close is functionally equivalent to calling CancelAsync.

CompileConstraintsAsync()

Asynchronously compiles all constraints specified by the Constraints property.

public : IAsyncOperation<SpeechRecognitionCompilationResult> CompileConstraintsAsync()
public IAsyncOperation<SpeechRecognitionCompilationResult> CompileConstraintsAsync()
Public Function CompileConstraintsAsync() As IAsyncOperation( Of SpeechRecognitionCompilationResult )
var iAsyncOperation = speechRecognizer.compileConstraintsAsync();
Returns

An asynchronous operation that returns a SpeechRecognitionCompilationResult when the compilation completes.

Remarks

CompileConstraintsAsync must always be called before RecognizeAsync or RecognizeWithUIAsync, even if no constraints are specified in the Constraints property.

This method returns an error if:

  • SpeechRecognizerState is not Idle or Paused.
  • One or more constraints are specified when the recognition session is initialized, recognition is Paused, all constraints are removed, and recognition is resumed.
  • No constraints are specified when the recognition session is initialized, recognition is Paused, constraints are added, and recognition is resumed.

To use web-service constraints, speech input and dictation support must be enabled in Settings by turning on the "Get to know me" option in the Settings -> Privacy -> Speech, inking, and typing page. See "Recognize speech input" in Speech recognition.
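The constraint rules above can be sketched with the hypothetical helper below (not part of the API): pause the continuous session, replace one constraint with another so the collection never goes from some constraints to none (or none to some), recompile, and resume. It assumes the session exposes a pauseAsync() method and a synchronous resume(), following the continuous-session pattern described earlier on this page.

```javascript
// Hypothetical sketch: swap one constraint for another mid-session
// without violating the rules listed in the remarks above.
async function swapConstraint(recognizer, oldConstraint, newConstraint) {
  const session = recognizer.continuousRecognitionSession;
  await session.pauseAsync();                       // recognition must be Paused
  const i = recognizer.constraints.indexOf(oldConstraint);
  recognizer.constraints[i] = newConstraint;        // replace, keeping count > 0
  const result = await recognizer.compileConstraintsAsync();
  session.resume();                                 // resume recognition
  return result;
}
```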

Dispose()

Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.

This member is not implemented in C++
void Dispose()
Sub Dispose

RecognizeAsync()

Begins a speech recognition session for a SpeechRecognizer object.

public : IAsyncOperation<SpeechRecognitionResult> RecognizeAsync()
public IAsyncOperation<SpeechRecognitionResult> RecognizeAsync()
Public Function RecognizeAsync() As IAsyncOperation( Of SpeechRecognitionResult )
var iAsyncOperation = speechRecognizer.recognizeAsync();
Returns

An asynchronous operation that returns a SpeechRecognitionResult when the recognition session completes.

RecognizeWithUIAsync()

Asynchronously starts a speech recognition session that includes additional UI mechanisms, including prompts, examples, text-to-speech (TTS), and confirmations.

public : IAsyncOperation<SpeechRecognitionResult> RecognizeWithUIAsync()
public IAsyncOperation<SpeechRecognitionResult> RecognizeWithUIAsync()
Public Function RecognizeWithUIAsync() As IAsyncOperation( Of SpeechRecognitionResult )
var iAsyncOperation = speechRecognizer.recognizeWithUIAsync();
Returns

An asynchronous operation that returns a SpeechRecognitionResult when the recognition session completes.

Remarks

The UI mechanisms supported by RecognizeWithUIAsync are specified by the UIOptions property.
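A typical configure-then-recognize flow might look like the hypothetical helper below (not part of the API). The option names mirror SpeechRecognizerUIOptions properties (audiblePrompt, exampleText); the prompt strings are illustrative.

```javascript
// Hypothetical sketch: set the UI options consumed by recognizeWithUIAsync,
// then start a recognition session with the default UI.
async function recognizeWithPrompt(recognizer, prompt, example) {
  recognizer.uiOptions.audiblePrompt = prompt; // prompt spoken/shown to the user
  recognizer.uiOptions.exampleText = example;  // example phrasing shown in the UI
  await recognizer.compileConstraintsAsync();  // always required first
  return recognizer.recognizeWithUIAsync();
}
```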

StopRecognitionAsync()

Asynchronously ends the speech recognition session.

public : IAsyncAction StopRecognitionAsync()
public IAsyncAction StopRecognitionAsync()
Public Function StopRecognitionAsync() As IAsyncAction
var iAsyncAction = speechRecognizer.stopRecognitionAsync();
Returns

No object or value is returned when this method completes.

TrySetSystemSpeechLanguageAsync(Language)

Asynchronously attempts to set the system language used for speech recognition on an IoT device.

Note

This method is available only in Embedded mode.

public : static IAsyncOperation<Platform::Boolean> TrySetSystemSpeechLanguageAsync(Language speechLanguage)
public static IAsyncOperation<bool> TrySetSystemSpeechLanguageAsync(Language speechLanguage)
Public Shared Function TrySetSystemSpeechLanguageAsync(speechLanguage As Language) As IAsyncOperation(Of Boolean)
var iAsyncOperation = Windows.Media.SpeechRecognition.SpeechRecognizer.trySetSystemSpeechLanguageAsync(speechLanguage);
Parameters
speechLanguage
Language

The BCP-47-based system language used for speech recognition.

Returns

An asynchronous operation that returns true if the set operation was a success. Otherwise, returns false.

Additional features and requirements
Device family
Windows 10 Fall Creators Update (introduced v10.0.16299.0)
API contract
Windows.Foundation.UniversalApiContract (introduced v5)

Remarks

Your app must declare the systemManagement capability, which gives apps access to basic system administration privileges, including locale, time zone, shutdown, and reboot.

The systemManagement capability must include the iot namespace when you declare it in your app's package manifest.

<Capabilities>
  <iot:Capability Name="systemManagement"/>
</Capabilities>

Use SystemSpeechLanguage to get the current system speech recognition language.

Use Windows.Globalization.Language.IsWellFormed to validate speechLanguage.

Events

HypothesisGenerated

Occurs during an ongoing dictation session when a recognition result fragment is returned by the speech recognizer.

public : event TypedEventHandler<SpeechRecognizer, SpeechRecognitionHypothesisGeneratedEventArgs> HypothesisGenerated
public event TypedEventHandler<SpeechRecognizer, SpeechRecognitionHypothesisGeneratedEventArgs> HypothesisGenerated
Public Event HypothesisGenerated As TypedEventHandler(Of SpeechRecognizer, SpeechRecognitionHypothesisGeneratedEventArgs)
function onHypothesisGenerated(eventArgs){/* Your code */}


speechRecognizer.addEventListener("hypothesisGenerated", onHypothesisGenerated);
speechRecognizer.removeEventListener("hypothesisGenerated", onHypothesisGenerated);

Remarks

The result fragment is useful for demonstrating that speech recognition is processing input during a lengthy dictation session.
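One way to surface interim results is sketched in the hypothetical helper below (not part of the API): each hypothesis fragment is handed to a UI callback so the user sees progress before the final result. It assumes the event arguments carry a hypothesis.text property, per SpeechRecognitionHypothesisGeneratedEventArgs.

```javascript
// Hypothetical sketch: show partial recognition text during dictation.
// `showInterim` is any callback that updates the app's UI.
function wireHypotheses(recognizer, showInterim) {
  recognizer.addEventListener("hypothesisGenerated", (args) => {
    // args.hypothesis.text is the partial recognition so far.
    showInterim(args.hypothesis.text + " ...");
  });
}
```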

RecognitionQualityDegrading

This event is raised when an audio problem is detected that might affect recognition accuracy.

public : event TypedEventHandler<SpeechRecognizer, SpeechRecognitionQualityDegradingEventArgs> RecognitionQualityDegrading
public event TypedEventHandler<SpeechRecognizer, SpeechRecognitionQualityDegradingEventArgs> RecognitionQualityDegrading
Public Event RecognitionQualityDegrading As TypedEventHandler(Of SpeechRecognizer, SpeechRecognitionQualityDegradingEventArgs)
function onRecognitionQualityDegrading(eventArgs){/* Your code */}


speechRecognizer.addEventListener("recognitionQualityDegrading", onRecognitionQualityDegrading);
speechRecognizer.removeEventListener("recognitionQualityDegrading", onRecognitionQualityDegrading);
StateChanged

This event is raised when a change occurs to the State property during audio capture.

public : event TypedEventHandler<SpeechRecognizer, SpeechRecognizerStateChangedEventArgs> StateChanged
public event TypedEventHandler<SpeechRecognizer, SpeechRecognizerStateChangedEventArgs> StateChanged
Public Event StateChanged As TypedEventHandler(Of SpeechRecognizer, SpeechRecognizerStateChangedEventArgs)
function onStateChanged(eventArgs){/* Your code */}


speechRecognizer.addEventListener("stateChanged", onStateChanged);
speechRecognizer.removeEventListener("stateChanged", onStateChanged);