IntentRecognizer Class
In addition to performing speech-to-text recognition, the IntentRecognizer extracts structured information about the intent of the speaker.
- Inheritance: IntentRecognizer
Constructor
IntentRecognizer(speech_config: azure.cognitiveservices.speech.SpeechConfig, audio_config: Optional[azure.cognitiveservices.speech.audio.AudioConfig] = None, intents: Optional[Iterable[Tuple[Union[str, azure.cognitiveservices.speech.intent.LanguageUnderstandingModel], str]]] = None)
Parameters
- speech_config
The config for the speech recognizer.
- audio_config
The config for the audio input.
- intents
Intents from an iterable over pairs of (model, intent_id) or (simple_phrase, intent_id) to be recognized.
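A minimal construction sketch. The subscription key and region are placeholder values; when audio_config is omitted, the default microphone is used as input:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials; substitute your own subscription key and service region.
speech_config = speechsdk.SpeechConfig(
    subscription="YourSubscriptionKey", region="YourServiceRegion")

# audio_config defaults to the default microphone when not supplied.
recognizer = speechsdk.intent.IntentRecognizer(speech_config=speech_config)
```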
Methods
| Method | Description |
| --- | --- |
| add_all_intents | Adds all intents from the specified Language Understanding Model. |
| add_intent | Adds an intent to the recognizer. Several call signatures are supported; see the method details below. |
| add_intents | Adds intents from an iterable over pairs of (model, intent_id) or (simple_phrase, intent_id). |
| recognize_once | Performs recognition in a blocking (synchronous) mode. Returns after a single utterance is recognized. The end of a single utterance is determined by listening for silence at the end or until a maximum of 15 seconds of audio is processed. The task returns the recognition text as result. For long-running multi-utterance recognition, use start_continuous_recognition_async instead. |
| recognize_once_async | Performs recognition in a non-blocking (asynchronous) mode. This will recognize a single utterance. The end of a single utterance is determined by listening for silence at the end or until a maximum of 15 seconds of audio is processed. For long-running multi-utterance recognition, use start_continuous_recognition_async instead. |
add_all_intents
Adds all intents from the specified Language Understanding Model.
add_all_intents(model: azure.cognitiveservices.speech.intent.LanguageUnderstandingModel)
Parameters
- model
The language understanding model from which all intents are added.
add_intent
Adds an intent to the recognizer. There are several ways to do this:
add_intent(simple_phrase): Adds a simple phrase that may be spoken by the user, indicating a specific user intent.
add_intent(simple_phrase, intent_id): Adds a simple phrase that may be spoken by the user, indicating a specific user intent. Once recognized, the result's intent id will match the id supplied here.
add_intent(model, intent_name): Adds a single intent by name from the specified LanguageUnderstandingModel.
add_intent(model, intent_name, intent_id): Adds a single intent by name from the specified LanguageUnderstandingModel. Once recognized, the result's intent id will match the id supplied here.
add_intent(*args)
Parameters
- model
The language understanding model containing the intent.
- intent_name
The name of the single intent to be included from the language understanding model.
- simple_phrase
The phrase corresponding to the intent.
- intent_id
A custom id string to be returned in the IntentRecognitionResult's intent_id property.
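The call signatures above can be sketched as follows. The subscription key, region, LUIS app id, phrases, and intent names are illustrative placeholders:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription="YourSubscriptionKey", region="YourServiceRegion")
recognizer = speechsdk.intent.IntentRecognizer(speech_config=speech_config)

# add_intent(simple_phrase): the phrase itself identifies the intent.
recognizer.add_intent("turn on the lights")

# add_intent(simple_phrase, intent_id): results carry the custom intent id.
recognizer.add_intent("turn off the lights", "HomeAutomation.TurnOff")

# add_intent(model, intent_name): one named intent from a language
# understanding model (placeholder app id).
model = speechsdk.intent.LanguageUnderstandingModel(app_id="YourLuisAppId")
recognizer.add_intent(model, "TurnOn")

# add_intent(model, intent_name, intent_id): same, with a custom result id.
recognizer.add_intent(model, "TurnOff", "HomeAutomation.TurnOff")
```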
add_intents
Add intents from an iterable over pairs of (model, intent_id) or (simple_phrase, intent_id).
add_intents(intents_iter: Iterable[Tuple[Union[str, azure.cognitiveservices.speech.intent.LanguageUnderstandingModel], str]])
Parameters
- intents_iter
Intents from an iterable over pairs of (model, intent_id) or (simple_phrase, intent_id) to be recognized.
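A sketch of the expected iterable shape, using made-up phrases and ids. Each element pairs either a simple phrase or a LanguageUnderstandingModel with an intent id:

```python
# Each pair is (simple_phrase_or_model, intent_id); values are illustrative.
intents = [
    ("turn on the lights", "HomeAutomation.TurnOn"),
    ("turn off the lights", "HomeAutomation.TurnOff"),
]
# recognizer.add_intents(intents)  # on an IntentRecognizer instance
```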
recognize_once
Performs recognition in a blocking (synchronous) mode. Returns after a single utterance is recognized. The end of a single utterance is determined by listening for silence at the end or until a maximum of 15 seconds of audio is processed. The task returns the recognition text as result. For long-running multi-utterance recognition, use start_continuous_recognition_async instead.
recognize_once() -> azure.cognitiveservices.speech.intent.IntentRecognitionResult
Returns
The result value of the synchronous recognition.
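A blocking-recognition sketch with the result checked against ResultReason. Credentials and the sample phrase are placeholders:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription="YourSubscriptionKey", region="YourServiceRegion")
recognizer = speechsdk.intent.IntentRecognizer(speech_config=speech_config)
recognizer.add_intent("turn on the lights", "HomeAutomation.TurnOn")

# Blocks until one utterance is recognized (or the 15-second cap is reached).
result = recognizer.recognize_once()

if result.reason == speechsdk.ResultReason.RecognizedIntent:
    print("Intent:", result.intent_id, "Text:", result.text)
elif result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Speech recognized, but no intent matched:", result.text)
elif result.reason == speechsdk.ResultReason.Canceled:
    print("Canceled:", result.cancellation_details.reason)
```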
recognize_once_async
Performs recognition in a non-blocking (asynchronous) mode. This will recognize a single utterance. The end of a single utterance is determined by listening for silence at the end or until a maximum of 15 seconds of audio is processed. For long-running multi-utterance recognition, use start_continuous_recognition_async instead.
recognize_once_async() -> azure.cognitiveservices.speech.ResultFuture
Returns
A future containing the result value of the asynchronous recognition.
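A non-blocking sketch, assuming the same placeholder configuration as above; the returned future is resolved with get():

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription="YourSubscriptionKey", region="YourServiceRegion")
recognizer = speechsdk.intent.IntentRecognizer(speech_config=speech_config)

# Starts recognition without blocking; other work can proceed here.
future = recognizer.recognize_once_async()

# get() blocks until the single-utterance result is available.
result = future.get()
```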
Attributes
canceled
Signal for events containing canceled recognition results (indicating a recognition attempt that was canceled as a result of a direct cancellation request or, alternatively, a transport or protocol failure).
Callbacks connected to this signal are called with an IntentRecognitionCanceledEventArgs instance as the single argument.
recognized
Signal for events containing final recognition results (indicating a successful recognition attempt).
Callbacks connected to this signal are called with an IntentRecognitionEventArgs instance as the single argument, dependent on the type of recognizer.
recognizing
Signal for events containing intermediate recognition results.
Callbacks connected to this signal are called with an IntentRecognitionEventArgs instance as the single argument.
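A sketch of wiring callbacks to the three signals via their connect() method, under the same placeholder credentials as above:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription="YourSubscriptionKey", region="YourServiceRegion")
recognizer = speechsdk.intent.IntentRecognizer(speech_config=speech_config)

# Each signal exposes connect() to register a callback.
recognizer.recognizing.connect(
    lambda evt: print("RECOGNIZING:", evt.result.text))
recognizer.recognized.connect(
    lambda evt: print("RECOGNIZED:", evt.result.text))
recognizer.canceled.connect(
    lambda evt: print("CANCELED:", evt.cancellation_details.reason))
```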