SpeechRecognizer Class

Enables speech recognition with either a default or a custom graphical user interface (GUI).

Syntax

Declaration

public sealed class SpeechRecognizer

Remarks

CompileConstraintsAsync() must always be called before RecognizeAsync() or RecognizeWithUIAsync(), even if no constraints are specified in the Constraints property.
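
The following minimal sketch (an illustration, not part of the reference itself; it assumes a UWP app that has declared the Microphone capability) shows the required call order:

    using System.Threading.Tasks;
    using Windows.Media.SpeechRecognition;

    private async Task RecognizeOnceAsync()
    {
        var recognizer = new SpeechRecognizer();

        // Compile first, even with an empty Constraints collection;
        // the predefined dictation grammar is used in that case.
        SpeechRecognitionCompilationResult compilation =
            await recognizer.CompileConstraintsAsync();

        if (compilation.Status == SpeechRecognitionResultStatus.Success)
        {
            SpeechRecognitionResult result = await recognizer.RecognizeAsync();
            // result.Text contains the recognized phrase.
        }
    }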

Constructors summary

Creates a new instance of the SpeechRecognizer class.

Creates a new instance of the SpeechRecognizer class with a language specifier.

Properties summary

Gets the collection of constraint objects that are associated with the SpeechRecognizer object.

Gets the continuous recognition session object (SpeechContinuousRecognitionSession) associated with this SpeechRecognizer.

Gets the language used for speech recognition.

CurrentLanguage is initialized with the value specified in the SpeechRecognizer(Language) constructor. If no language is specified in the constructor, CurrentLanguage is initialized with the value of SystemSpeechLanguage.

Gets the state of the speech recognizer.

Gets the collection of languages supported by the custom grammars of the SpeechRecognitionGrammarFileConstraint and SpeechRecognitionListConstraint objects specified in the Constraints property.

Constraints, or grammars, define the spoken words and phrases that can be matched by the speech recognizer. You can specify one of the pre-defined, web-service grammars or you can create a custom grammar, described below, that is installed with your app (speech recognition using a custom constraint is performed on the device).

  • Programmatic list constraints provide a lightweight approach to creating simple grammars using a list of words or phrases. A list constraint works well for recognizing short, distinct phrases. Explicitly specifying all words in a grammar also improves recognition accuracy, as the speech recognition engine must only process speech to confirm a match. The list can also be programmatically updated.
  • A Speech Recognition Grammar Specification (SRGS) grammar is a static document that, unlike a programmatic list constraint, uses the XML format defined by the SRGS Version 1.0 specification. An SRGS grammar provides the greatest control over the speech recognition experience by letting you capture multiple semantic meanings in a single recognition.

Gets the collection of languages supported by the pre-defined, web-service grammars of the SpeechRecognitionTopicConstraint objects specified in the Constraints property.

Constraints, or grammars, define the spoken words and phrases that can be matched by the speech recognizer. You can specify one of the pre-defined, web-service grammars or you can create a custom grammar that is installed with your app.

Predefined dictation and web-search grammars provide speech recognition for your app without requiring you to author a grammar. When using these grammars, speech recognition is performed by a remote web service and the results are returned to the device.

Gets the speech language of the device specified in Settings > Time & Language > Speech.

Note

Speech languages are added from the Settings > Time & Language > Region & language screen.

  1. Click Add a language.
  2. Select a language from the Add a language screen.
  3. Depending on the language selected, a language region screen might be displayed. Select the region.
  4. From the Settings > Time & Language > Region & language screen, select the language and click Options.
  5. If a speech language is available for the selected language and region, a Download button is displayed on the Language options screen. Click this button to download and install the speech language.

Gets how long a speech recognizer ignores silence or unrecognizable sounds (babble) and continues listening for speech input.

Gets the UI settings for the RecognizeWithUIAsync() method.

Methods summary

Disposes the speech recognizer by freeing, releasing, or resetting allocated resources.

If a SpeechContinuousRecognitionSession is underway, Close() is functionally equivalent to calling CancelAsync().

Asynchronously compile all constraints specified by the Constraints property.

CompileConstraintsAsync() must always be called before RecognizeAsync() or RecognizeWithUIAsync(), even if no constraints are specified in the Constraints property.

Begins a speech recognition session for a SpeechRecognizer object.

Asynchronously starts a speech recognition session that includes additional UI mechanisms, including prompts, examples, text-to-speech (TTS), and confirmations.

The UI mechanisms supported by RecognizeWithUIAsync() are specified by the UIOptions property.

Asynchronously ends the speech recognition session.

Events summary

Occurs during an ongoing dictation session when a recognition result fragment is returned by the speech recognizer.

The result fragment is useful for demonstrating that speech recognition is processing input during a lengthy dictation session.

This event is raised when a SpeechRecognitionAudioProblem is detected that might affect recognition accuracy.

This event is raised when a change occurs to the State property during audio capture.

Constructors

  • SpeechRecognizer()

    Creates a new instance of the SpeechRecognizer class.

    public SpeechRecognizer()
  • SpeechRecognizer(Language)

    Creates a new instance of the SpeechRecognizer class with a language specifier.

    public SpeechRecognizer(Language language)

    Parameters

    • language: The language to use for speech recognition.

    Remarks

    CurrentLanguage is set to the value of language.
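
    A minimal sketch (the "en-US" language tag is illustrative):

        using Windows.Globalization;
        using Windows.Media.SpeechRecognition;

        // Create a recognizer for U.S. English instead of the
        // default system speech language.
        var recognizer = new SpeechRecognizer(new Language("en-US"));
        // recognizer.CurrentLanguage now reflects the en-US language.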

Properties

  • Constraints

    Gets the collection of constraint objects that are associated with the SpeechRecognizer object.

    public IVector<ISpeechRecognitionConstraint> Constraints { get; }

    Property Value

    • The collection of constraint objects associated with the SpeechRecognizer.
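
    A minimal sketch showing how a programmatic list constraint can be added to this collection (the phrases, the "yesNo" tag, and the method name are illustrative):

        using System.Threading.Tasks;
        using Windows.Media.SpeechRecognition;

        private async Task InitConstraintsAsync()
        {
            var recognizer = new SpeechRecognizer();

            // A list constraint matches only the listed phrases.
            recognizer.Constraints.Add(new SpeechRecognitionListConstraint(
                new[] { "yes", "no" }, "yesNo"));

            // Constraints must be compiled before recognition starts.
            await recognizer.CompileConstraintsAsync();
        }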

  • ContinuousRecognitionSession

    Gets the continuous recognition session object (SpeechContinuousRecognitionSession) associated with this SpeechRecognizer.

    public SpeechContinuousRecognitionSession ContinuousRecognitionSession { get; }

    Property Value

    • The continuous recognition session object.
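
    A minimal sketch of a continuous dictation session (the method name is illustrative):

        using System.Threading.Tasks;
        using Windows.Media.SpeechRecognition;

        private async Task RunContinuousSessionAsync()
        {
            var recognizer = new SpeechRecognizer();
            await recognizer.CompileConstraintsAsync();

            // Results are delivered as they are produced.
            recognizer.ContinuousRecognitionSession.ResultGenerated +=
                (session, args) =>
                {
                    string text = args.Result.Text; // recognized phrase
                };

            await recognizer.ContinuousRecognitionSession.StartAsync();
            // ... later, end the session:
            await recognizer.ContinuousRecognitionSession.StopAsync();
        }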

  • CurrentLanguage

    Gets the language used for speech recognition.

    CurrentLanguage is initialized with the value specified in the SpeechRecognizer(Language) constructor. If no language is specified in the constructor, CurrentLanguage is initialized with the value of SystemSpeechLanguage.

    public Language CurrentLanguage { get; }

    Property Value

    • The language used for speech recognition.

  • State

    Gets the state of the speech recognizer.

    public SpeechRecognizerState State { get; }

    Property Value

    • The state of the speech recognizer.

  • SupportedGrammarLanguages

    Gets the collection of languages supported by the custom grammars of the SpeechRecognitionGrammarFileConstraint and SpeechRecognitionListConstraint objects specified in the Constraints property.

    Constraints, or grammars, define the spoken words and phrases that can be matched by the speech recognizer. You can specify one of the pre-defined, web-service grammars or you can create a custom grammar, described below, that is installed with your app (speech recognition using a custom constraint is performed on the device).

    • Programmatic list constraints provide a lightweight approach to creating simple grammars using a list of words or phrases. A list constraint works well for recognizing short, distinct phrases. Explicitly specifying all words in a grammar also improves recognition accuracy, as the speech recognition engine must only process speech to confirm a match. The list can also be programmatically updated.
    • A Speech Recognition Grammar Specification (SRGS) grammar is a static document that, unlike a programmatic list constraint, uses the XML format defined by the SRGS Version 1.0 specification. An SRGS grammar provides the greatest control over the speech recognition experience by letting you capture multiple semantic meanings in a single recognition.

    public static IVectorView<Language> SupportedGrammarLanguages { get; }

    Property Value

    • The collection of grammar languages.
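
    A minimal sketch that lists the languages available for custom, on-device grammars (the method name is illustrative):

        using Windows.Globalization;
        using Windows.Media.SpeechRecognition;

        private void ListGrammarLanguages()
        {
            foreach (Language language in SpeechRecognizer.SupportedGrammarLanguages)
            {
                System.Diagnostics.Debug.WriteLine(
                    $"{language.DisplayName} ({language.LanguageTag})");
            }
        }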

  • SupportedTopicLanguages

    Gets the collection of languages supported by the pre-defined, web-service grammars of the SpeechRecognitionTopicConstraint objects specified in the Constraints property.

    Constraints, or grammars, define the spoken words and phrases that can be matched by the speech recognizer. You can specify one of the pre-defined, web-service grammars or you can create a custom grammar that is installed with your app.

    Predefined dictation and web-search grammars provide speech recognition for your app without requiring you to author a grammar. When using these grammars, speech recognition is performed by a remote web service and the results are returned to the device.

    public static IVectorView<Language> SupportedTopicLanguages { get; }

    Property Value

    • The collection of languages supported by the pre-defined, web-service grammars.
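
    A minimal sketch that adds the predefined dictation grammar only when the desired language is supported (the "en-US" tag, topic hint, and method name are illustrative):

        using System.Linq;
        using System.Threading.Tasks;
        using Windows.Globalization;
        using Windows.Media.SpeechRecognition;

        private async Task UseDictationGrammarAsync()
        {
            bool supported = SpeechRecognizer.SupportedTopicLanguages
                .Any(l => l.LanguageTag == "en-US");
            if (!supported) return;

            var recognizer = new SpeechRecognizer(new Language("en-US"));
            recognizer.Constraints.Add(new SpeechRecognitionTopicConstraint(
                SpeechRecognitionScenario.Dictation, "dictation"));
            await recognizer.CompileConstraintsAsync();
        }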

  • SystemSpeechLanguage

    Gets the speech language of the device specified in Settings > Time & Language > Speech.

    Note

    Speech languages are added from the Settings > Time & Language > Region & language screen.

    1. Click Add a language.
    2. Select a language from the Add a language screen.
    3. Depending on the language selected, a language region screen might be displayed. Select the region.
    4. From the Settings > Time & Language > Region & language screen, select the language and click Options.
    5. If a speech language is available for the selected language and region, a Download button is displayed on the Language options screen. Click this button to download and install the speech language.

    public static Language SystemSpeechLanguage { get; }

    Property Value

    • The speech language of the device.

    Remarks

    If no language is specified in the SpeechRecognizer constructor, CurrentLanguage is initialized with the value of SystemSpeechLanguage.

  • Timeouts

    Gets how long a speech recognizer ignores silence or unrecognizable sounds (babble) and continues listening for speech input.

    public SpeechRecognizerTimeouts Timeouts { get; }

    Property Value

    • The timeout settings of the speech recognizer.
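
    A minimal sketch; the timeout values shown are illustrative, not the defaults:

        using System;
        using Windows.Media.SpeechRecognition;

        private void ConfigureTimeouts(SpeechRecognizer recognizer)
        {
            // How long to wait for speech to begin.
            recognizer.Timeouts.InitialSilenceTimeout = TimeSpan.FromSeconds(5);
            // How long to tolerate unrecognizable sounds (babble).
            recognizer.Timeouts.BabbleTimeout = TimeSpan.FromSeconds(4);
            // Silence that marks the end of an utterance.
            recognizer.Timeouts.EndSilenceTimeout = TimeSpan.FromSeconds(1);
        }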

  • UIOptions

    Gets the UI settings for the RecognizeWithUIAsync() method.

    public SpeechRecognizerUIOptions UIOptions { get; }

    Property Value

    • The UI settings for the RecognizeWithUIAsync() method.
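
    A minimal sketch; the prompt strings are illustrative:

        using Windows.Media.SpeechRecognition;

        private void ConfigureUI(SpeechRecognizer recognizer)
        {
            // Spoken and written prompts for the default recognition UI.
            recognizer.UIOptions.AudiblePrompt = "Say what you want to search for.";
            recognizer.UIOptions.ExampleText = "For example, 'weather for London'";
            // Show a confirmation screen and read the result back.
            recognizer.UIOptions.ShowConfirmation = true;
            recognizer.UIOptions.IsReadBackEnabled = true;
        }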

Methods

  • Close()

    Disposes the speech recognizer by freeing, releasing, or resetting allocated resources.

    If a SpeechContinuousRecognitionSession is underway, Close() is functionally equivalent to calling CancelAsync().

    public void Close()

  • CompileConstraintsAsync()

    Asynchronously compile all constraints specified by the Constraints property.

    CompileConstraintsAsync() must always be called before RecognizeAsync() or RecognizeWithUIAsync(), even if no constraints are specified in the Constraints property.

    public IAsyncOperation<SpeechRecognitionCompilationResult> CompileConstraintsAsync()

    Returns

    • The result of the constraint compilation, as a SpeechRecognitionCompilationResult object.

    Remarks

    This method reports compilation failures through the Status property of the returned SpeechRecognitionCompilationResult.
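
    A minimal sketch that checks the compilation status before recognizing (the method name is illustrative):

        using System.Threading.Tasks;
        using Windows.Media.SpeechRecognition;

        private async Task<bool> TryCompileAsync(SpeechRecognizer recognizer)
        {
            SpeechRecognitionCompilationResult compilation =
                await recognizer.CompileConstraintsAsync();

            // Status is SpeechRecognitionResultStatus.Success when all
            // constraints compiled; other values describe the failure.
            return compilation.Status == SpeechRecognitionResultStatus.Success;
        }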

  • RecognizeAsync()

    Begins a speech recognition session for a SpeechRecognizer object.

    public IAsyncOperation<SpeechRecognitionResult> RecognizeAsync()

    Returns

    • The result of the speech recognition session that was initiated by the SpeechRecognizer object.
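
    A minimal sketch that starts a session and filters out rejected results (the method name is illustrative):

        using System.Threading.Tasks;
        using Windows.Media.SpeechRecognition;

        private async Task<string> RecognizeOnceAsync(SpeechRecognizer recognizer)
        {
            // CompileConstraintsAsync() must already have succeeded.
            SpeechRecognitionResult result = await recognizer.RecognizeAsync();

            if (result.Status == SpeechRecognitionResultStatus.Success &&
                result.Confidence != SpeechRecognitionConfidence.Rejected)
            {
                return result.Text; // the recognized phrase
            }
            return null;
        }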

  • RecognizeWithUIAsync()

    Asynchronously starts a speech recognition session that includes additional UI mechanisms, including prompts, examples, text-to-speech (TTS), and confirmations.

    The UI mechanisms supported by RecognizeWithUIAsync() are specified by the UIOptions property.

    public IAsyncOperation<SpeechRecognitionResult> RecognizeWithUIAsync()

    Returns

    • The result of the speech recognition session.
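
    A minimal sketch (the example text and method name are illustrative):

        using System.Threading.Tasks;
        using Windows.Media.SpeechRecognition;

        private async Task<string> RecognizeWithUiAsync()
        {
            var recognizer = new SpeechRecognizer();
            recognizer.UIOptions.ExampleText = "For example, 'yes' or 'no'";
            await recognizer.CompileConstraintsAsync();

            // Displays the system recognition UI while listening.
            SpeechRecognitionResult result = await recognizer.RecognizeWithUIAsync();
            return result.Text;
        }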

  • StopRecognitionAsync()

    Asynchronously ends the speech recognition session.

    public IAsyncAction StopRecognitionAsync()

    Returns

    • An asynchronous action that completes when the recognition session has ended.

Events

  • HypothesisGenerated

    Occurs during an ongoing dictation session when a recognition result fragment is returned by the speech recognizer.

    The result fragment is useful for demonstrating that speech recognition is processing input during a lengthy dictation session.

    public event TypedEventHandler<SpeechRecognizer, SpeechRecognitionHypothesisGeneratedEventArgs> HypothesisGenerated
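
    A minimal sketch that surfaces partial results, for example to update a text box while the user is still speaking (the method name is illustrative):

        using Windows.Media.SpeechRecognition;

        private void SubscribeHypotheses(SpeechRecognizer recognizer)
        {
            recognizer.HypothesisGenerated += (sender, args) =>
            {
                // Partial text recognized so far in the dictation.
                string partialText = args.Hypothesis.Text;
            };
        }
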
  • RecognitionQualityDegrading

    This event is raised when a SpeechRecognitionAudioProblem is detected that might affect recognition accuracy.

    public event TypedEventHandler<SpeechRecognizer, SpeechRecognitionQualityDegradingEventArgs> RecognitionQualityDegrading

  • StateChanged

    This event is raised when a change occurs to the State property during audio capture.

    public event TypedEventHandler<SpeechRecognizer, SpeechRecognizerStateChangedEventArgs> StateChanged
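
    A minimal sketch that tracks state transitions (the method name is illustrative):

        using Windows.Media.SpeechRecognition;

        private void SubscribeStateChanges(SpeechRecognizer recognizer)
        {
            recognizer.StateChanged += (sender, args) =>
            {
                // For example: Idle, Capturing, Processing.
                SpeechRecognizerState current = args.State;
            };
        }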

Device family

Windows 10 (introduced v10.0.10240.0)

API contract

Windows.Foundation.UniversalApiContract (introduced v1)

Attributes

Windows.Foundation.Metadata.ActivatableAttribute
Windows.Foundation.Metadata.ActivatableAttribute
Windows.Foundation.Metadata.ContractVersionAttribute
Windows.Foundation.Metadata.MarshalingBehaviorAttribute
Windows.Foundation.Metadata.StaticAttribute

Details

Assembly

Windows.Media.SpeechRecognition.dll