Managing loaded constraints to optimize recognition (XAML)
After a constraint collection is loaded for recognition, your app can manage which constraints are enabled for recognition operations by setting the IsEnabled property of a constraint to true or false. The default setting is true. Typically, it's more efficient to load constraints once and then enable or disable them as needed, rather than to load and unload constraints for each recognition operation. Setting the IsEnabled property takes less time and fewer processor resources than loading, unloading, and compiling a constraint.
Restrict the number of constraints that are enabled for recognition to limit the amount of data that the speech recognizer needs to search when matching speech input. This improves both the performance and the accuracy of speech recognition. Decide which constraints to enable based on the phrases that your app expects in the context of the current recognition operation.
For example, if the current app context is to display a color that the user speaks, you probably don't need to enable a constraint that recognizes the names of animals. Recognizing animal names serves no purpose when your app needs to know which color to display.
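The color/animal scenario might be sketched in C# as follows. This is a minimal sketch, not a complete sample: the phrase lists and tag strings ("colors", "animals") are illustrative, and error handling is omitted. The code assumes it runs inside an async method.

```csharp
using Windows.Media.SpeechRecognition;

// Create the recognizer and load both constraint lists once, at initialization.
var recognizer = new SpeechRecognizer();

// Illustrative phrase lists and tags; replace with your app's own.
var colorConstraint = new SpeechRecognitionListConstraint(
    new[] { "red", "green", "blue" }, "colors");
var animalConstraint = new SpeechRecognitionListConstraint(
    new[] { "cat", "dog", "horse" }, "animals");

recognizer.Constraints.Add(colorConstraint);
recognizer.Constraints.Add(animalConstraint);

// Compile once. Afterwards, toggle IsEnabled as the app context changes
// instead of loading, unloading, and recompiling constraints.
await recognizer.CompileConstraintsAsync();

// The current context is "pick a color", so disable the animal
// constraint before starting the recognition operation.
colorConstraint.IsEnabled = true;
animalConstraint.IsEnabled = false;

SpeechRecognitionResult result = await recognizer.RecognizeAsync();
```

Because only the color constraint is enabled, the recognizer searches a smaller set of phrases for this operation, which is the performance and accuracy benefit described above.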
When you decide which constraints to enable for a recognition operation, make sure that you let the user know what they can say for that operation. This increases the likelihood that the user speaks a phrase that can be matched to an active constraint. You can prompt the user for what can be spoken using the SpeechRecognizerUIOptions.AudiblePrompt and SpeechRecognizerUIOptions.ExampleText properties, which you can set through the SpeechRecognizer.UIOptions property.
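Setting those prompt properties might look like the following sketch, continuing the color scenario (the prompt wording is illustrative). RecognizeWithUIAsync shows the system recognition UI, which is where the example text appears.

```csharp
// Tell the user what they can say before recognition starts.
// AudiblePrompt is spoken to the user; ExampleText appears in the
// system recognition UI. Both strings here are illustrative.
recognizer.UIOptions.AudiblePrompt = "Which color do you want?";
recognizer.UIOptions.ExampleText = "For example, say red, green, or blue.";

SpeechRecognitionResult result = await recognizer.RecognizeWithUIAsync();
```

Aligning the example text with the currently enabled constraints (here, the color list) helps the user speak a phrase that the active constraints can actually match.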