Adding and compiling constraints (XAML)

Applies to Windows Phone only

A speech recognizer needs at least one constraint, and its constraints must be compiled, before the recognizer can perform recognition. If you don't specify a constraint, the predefined dictation grammar is used, but you still have to compile it.

Providing the speech recognizer with a constraint

To provide the speech recognizer with a constraint, you add a constraint to SpeechRecognizer.Constraints. There are three kinds of constraint that you can add:

  • A SpeechRecognitionTopicConstraint, which is based on a predefined grammar (dictation or web search).
  • A SpeechRecognitionListConstraint, which is based on a programmatic list of words or phrases.
  • A SpeechRecognitionGrammarFileConstraint, which is based on a Speech Recognition Grammar Specification (SRGS) grammar file.

Each instance of a speech recognizer has only one constraint collection. An app can add one or more constraints to a speech recognizer, and it must call SpeechRecognizer.CompileConstraintsAsync to compile the constraints before the recognizer begins the recognition process.

You can add only certain combinations of constraints to a constraint collection:

  • If the collection contains a constraint based on a predefined grammar (dictation or web search), then that must be the only constraint in the collection.
  • You can mix one or more list constraints and/or grammar file constraints in the same collection, as long as the collection contains no constraint based on a predefined grammar; a sketch follows this list.
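
For example, the following sketch mixes a list constraint and a grammar file constraint in one collection, then uses the matching constraint's Tag to tell them apart. The button name and tag strings are illustrative, and the sketch assumes a Colors.grxml grammar file like the one described later in this topic:

private async void ButtonMixedConstraints_Click(object sender, RoutedEventArgs e)
{
    // Create an instance of SpeechRecognizer.
    this.speechRecognizer = new Windows.Media.SpeechRecognition.SpeechRecognizer();

    // Add a list constraint and a grammar file constraint to the same collection.
    var listConstraint = new Windows.Media.SpeechRecognition.SpeechRecognitionListConstraint(new string[] { "Yes", "No" }, "yesOrNo");
    var storageFile = await Windows.Storage.StorageFile.GetFileFromApplicationUriAsync(new Uri("ms-appx:///Colors.grxml"));
    var grammarFileConstraint = new Windows.Media.SpeechRecognition.SpeechRecognitionGrammarFileConstraint(storageFile, "colors");

    speechRecognizer.Constraints.Add(listConstraint);
    speechRecognizer.Constraints.Add(grammarFileConstraint);

    // Compile all of the constraints in the collection in one call.
    await speechRecognizer.CompileConstraintsAsync();

    // Start recognition.
    Windows.Media.SpeechRecognition.SpeechRecognitionResult speechRecognitionResult = await this.speechRecognizer.RecognizeWithUIAsync();

    // The Tag of the matching constraint identifies which grammar produced the result.
    if (speechRecognitionResult.Constraint != null && speechRecognitionResult.Constraint.Tag == "colors")
    {
        // Handle a color command here.
    }
}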

Working with a short message dictation grammar

The short message dictation grammar is the default constraint, used if none is specified. For a code example that shows how to begin recognition using this grammar, see Quickstart: Speech recognition (XAML).
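
As a quick illustration, here is a minimal sketch (the button name is illustrative). Note that no constraint is added, so the dictation grammar is used, but compilation is still required:

private async void ButtonDictation_Click(object sender, RoutedEventArgs e)
{
    // Create an instance of SpeechRecognizer.
    this.speechRecognizer = new Windows.Media.SpeechRecognition.SpeechRecognizer();

    // No constraint is added, so the predefined dictation grammar is used.
    // You must still compile before recognition can begin.
    await speechRecognizer.CompileConstraintsAsync();

    // Start recognition.
    Windows.Media.SpeechRecognition.SpeechRecognitionResult speechRecognitionResult = await this.speechRecognizer.RecognizeWithUIAsync();

    // Do something with the recognition result.
    var messageDialog = new Windows.UI.Popups.MessageDialog(speechRecognitionResult.Text, "Text spoken");
    await messageDialog.ShowAsync();
}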

Working with a web search grammar

Unlike the dictation grammar, which is loaded by default, a web search grammar must be added to the constraints collection explicitly. The following example shows how to do this:

private async void ButtonWeatherSearch_Click(object sender, RoutedEventArgs e)
{
    // Create an instance of SpeechRecognizer.
    this.speechRecognizer = new Windows.Media.SpeechRecognition.SpeechRecognizer();

    // Add a web search grammar to the recognizer.
    var webSearchGrammar = new Windows.Media.SpeechRecognition.SpeechRecognitionTopicConstraint(Windows.Media.SpeechRecognition.SpeechRecognitionScenario.WebSearch, "webSearch");

    speechRecognizer.UIOptions.AudiblePrompt = "Say what you want to search for...";
    speechRecognizer.UIOptions.ExampleText = @"Ex. 'weather for London'";
    speechRecognizer.Constraints.Add(webSearchGrammar);

    // Compile the constraint.
    await speechRecognizer.CompileConstraintsAsync();

    // Start recognition.
    Windows.Media.SpeechRecognition.SpeechRecognitionResult speechRecognitionResult = await this.speechRecognizer.RecognizeWithUIAsync();

    // Do something with the recognition result.
    var messageDialog = new Windows.UI.Popups.MessageDialog(speechRecognitionResult.Text, "Text spoken");
    await messageDialog.ShowAsync();
}
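
The examples in this topic use the recognized text directly. In practice, you may want to confirm that recognition succeeded before acting on it. A brief sketch, using the Status and Confidence properties of SpeechRecognitionResult:

// Sketch: check the outcome of recognition before using the recognized text.
if (speechRecognitionResult.Status == Windows.Media.SpeechRecognition.SpeechRecognitionResultStatus.Success &&
    speechRecognitionResult.Confidence != Windows.Media.SpeechRecognition.SpeechRecognitionConfidence.Rejected)
{
    // Safe to use speechRecognitionResult.Text here.
}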

Working with a programmatic list constraint

The following example shows how to constrain recognition to the items in an array of words or phrases:

private async void ButtonResponse_Click(object sender, RoutedEventArgs e)
{
    // Create an instance of SpeechRecognizer.
    this.speechRecognizer = new Windows.Media.SpeechRecognition.SpeechRecognizer();

    // You could create this array dynamically.
    string[] responses = { "Yes", "No" };

    // Add a list constraint to the recognizer.
    var listConstraint = new Windows.Media.SpeechRecognition.SpeechRecognitionListConstraint(responses, "yesOrNo");

    speechRecognizer.UIOptions.ExampleText = @"Ex. 'Yes', 'No'";
    speechRecognizer.Constraints.Add(listConstraint);

    // Compile the constraint.
    await speechRecognizer.CompileConstraintsAsync();

    // Start recognition.
    Windows.Media.SpeechRecognition.SpeechRecognitionResult speechRecognitionResult = await this.speechRecognizer.RecognizeWithUIAsync();

    // Do something with the recognition result.
    var messageDialog = new Windows.UI.Popups.MessageDialog(speechRecognitionResult.Text, "Text spoken");
    await messageDialog.ShowAsync();
}

Keep the following points in mind:

  • You can add multiple list constraints to a speech recognizer's constraints collection.
  • You can use any collection that implements IEnumerable<string> for the string values, as the sketch after this list shows.
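
For example, a brief sketch (the command strings and tag are hypothetical, and an existing speechRecognizer instance is assumed) that builds the constraint from a List<string> assembled at run time:

// Sketch: any IEnumerable<string> can supply the values, such as a
// List<string> populated at run time.
var commands = new System.Collections.Generic.List<string> { "Play", "Pause", "Stop" };
var commandConstraint = new Windows.Media.SpeechRecognition.SpeechRecognitionListConstraint(commands, "playbackCommands");
speechRecognizer.Constraints.Add(commandConstraint);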

Working with an SRGS grammar

To add an SRGS grammar to a speech recognizer, load an XML file containing the grammar and use that to create the constraint. The following example assumes that an SRGS grammar has been added to the project inside a file named Colors.grxml with Build action set to Content and Copy to output directory set to Copy if newer:

private async void ButtonColors_Click(object sender, RoutedEventArgs e)
{
    // Create an instance of SpeechRecognizer.
    this.speechRecognizer = new Windows.Media.SpeechRecognition.SpeechRecognizer();

    // Add a grammar file constraint to the recognizer.
    var storageFile = await Windows.Storage.StorageFile.GetFileFromApplicationUriAsync(new Uri("ms-appx:///Colors.grxml"));
    var grammarfileConstraint = new Windows.Media.SpeechRecognition.SpeechRecognitionGrammarFileConstraint(storageFile, "colors");

    speechRecognizer.UIOptions.ExampleText = @"Ex. 'blue background', 'green text'";
    speechRecognizer.Constraints.Add(grammarfileConstraint);

    // Compile the constraint.
    await speechRecognizer.CompileConstraintsAsync();

    // Start recognition.
    Windows.Media.SpeechRecognition.SpeechRecognitionResult speechRecognitionResult = await this.speechRecognizer.RecognizeWithUIAsync();

    // Do something with the recognition result.
    var messageDialog = new Windows.UI.Popups.MessageDialog(speechRecognitionResult.Text, "Text spoken");
    await messageDialog.ShowAsync();
}
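
Because an SRGS grammar can attach semantic tags to what users say, the result of recognizing against one can carry more than the raw text. A brief sketch (the "colorName" key is hypothetical; it depends on the tags defined in Colors.grxml):

// Sketch: read a semantic value produced by the SRGS grammar, if present.
if (speechRecognitionResult.SemanticInterpretation.Properties.ContainsKey("colorName"))
{
    var colorName = speechRecognitionResult.SemanticInterpretation.Properties["colorName"][0];
    // Use colorName to update the app, for example the background color.
}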

Keep the following points in mind:

  • You can add multiple grammar file constraints to a speech recognizer's constraints collection.
  • An accepted convention is to use the .grxml file extension for XML-based grammar documents that conform to SRGS rules.

If you load multiple grammars into a speech recognizer's constraints collection, you can selectively enable and disable the constraints as users navigate through your app. This ensures that your app listens only for what is pertinent to the current app context. For more info, see Managing loaded constraints to optimize recognition (XAML).
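
For example, a brief sketch (the constraint variables are assumed to have been added and compiled as in the mixed-constraints sketch earlier in this topic):

// Sketch: toggle constraints as the app context changes. IsEnabled can be
// set after CompileConstraintsAsync completes, while recognition is not active.
listConstraint.IsEnabled = false;        // Stop listening for yes/no responses.
grammarFileConstraint.IsEnabled = true;  // Listen for color commands instead.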