SpeechRecognizer Class
A speech recognizer. To specify source language information, provide at most one of these three parameters: language, source_language_config, or auto_detect_source_language_config.
- Inheritance
  - SpeechRecognizer
Constructor
SpeechRecognizer(speech_config: azure.cognitiveservices.speech.SpeechConfig, audio_config: Optional[azure.cognitiveservices.speech.audio.AudioConfig] = None, language: Optional[str] = None, source_language_config: Optional[azure.cognitiveservices.speech.languageconfig.SourceLanguageConfig] = None, auto_detect_source_language_config: Optional[azure.cognitiveservices.speech.languageconfig.AutoDetectSourceLanguageConfig] = None)
Parameters
- speech_config
The config for the speech recognizer.
- audio_config
The config for the audio input. If omitted, the default microphone is used.
- language
The source language (e.g. "en-US").
- source_language_config
The source language config.
- auto_detect_source_language_config
The auto-detect source language config.
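As a sketch of how these parameters fit together (the subscription key, region, and file name are placeholders, not values from this reference):

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials -- substitute your own key and service region.
speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey",
                                       region="YourServiceRegion")

# Recognize from a WAV file; omit audio_config to use the default microphone.
audio_config = speechsdk.audio.AudioConfig(filename="sample.wav")

# Specify at most ONE of language, source_language_config,
# or auto_detect_source_language_config.
recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config,
    audio_config=audio_config,
    language="en-US",
)
```

Passing more than one of the three source-language parameters raises an error, so pick the one that matches your scenario (a fixed language, a language with a custom endpoint, or automatic detection).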
Methods
| recognize_once | Performs recognition in a blocking (synchronous) mode. Returns after a single utterance is recognized. The end of a single utterance is determined by listening for silence at the end or until a maximum of 15 seconds of audio is processed. Returns the recognition text as the result. For long-running multi-utterance recognition, use start_continuous_recognition_async instead. |
| recognize_once_async | Performs recognition in a non-blocking (asynchronous) mode. This will recognize a single utterance. The end of a single utterance is determined by listening for silence at the end or until a maximum of 15 seconds of audio is processed. For long-running multi-utterance recognition, use start_continuous_recognition_async instead. |
recognize_once
Performs recognition in a blocking (synchronous) mode. Returns after a single utterance is recognized. The end of a single utterance is determined by listening for silence at the end or until a maximum of 15 seconds of audio is processed. Returns the recognition text as the result. For long-running multi-utterance recognition, use start_continuous_recognition_async instead.
recognize_once() -> azure.cognitiveservices.speech.SpeechRecognitionResult
Returns
The result value of the synchronous recognition.
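A minimal usage sketch, assuming a recognizer configured as above (credentials are placeholders):

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey",
                                       region="YourServiceRegion")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

# Blocks until one utterance is recognized (silence or 15 s of audio).
result = recognizer.recognize_once()

# Inspect the result reason before using the text.
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Recognized:", result.text)
elif result.reason == speechsdk.ResultReason.NoMatch:
    print("No speech could be recognized.")
elif result.reason == speechsdk.ResultReason.Canceled:
    print("Recognition canceled:", result.cancellation_details.reason)
```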
recognize_once_async
Performs recognition in a non-blocking (asynchronous) mode. This will recognize a single utterance. The end of a single utterance is determined by listening for silence at the end or until a maximum of 15 seconds of audio is processed. For long-running multi-utterance recognition, use start_continuous_recognition_async instead.
recognize_once_async() -> azure.cognitiveservices.speech.ResultFuture
Returns
A future containing the result value of the asynchronous recognition.
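The asynchronous variant returns a ResultFuture immediately; calling its get() method blocks until the result is available. A sketch, under the same placeholder-credentials assumption:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey",
                                       region="YourServiceRegion")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

# Returns immediately with a future; recognition runs in the background.
future = recognizer.recognize_once_async()

# ... do other work here ...

# get() blocks until the single-utterance result is ready.
result = future.get()
print(result.text)
```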
Attributes
canceled
Signal for events containing canceled recognition results (indicating a recognition attempt that was canceled as a result of a direct cancellation request or, alternatively, a transport or protocol failure).
Callbacks connected to this signal are called with a SpeechRecognitionCanceledEventArgs instance as the single argument.
recognized
Signal for events containing final recognition results (indicating a successful recognition attempt).
Callbacks connected to this signal are called with a SpeechRecognitionEventArgs instance as the single argument, dependent on the type of recognizer.
recognizing
Signal for events containing intermediate recognition results.
Callbacks connected to this signal are called with a SpeechRecognitionEventArgs instance as the single argument.
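These signals are most useful with continuous recognition, where callbacks fire as audio is processed. A sketch wiring up the three signals (credentials and file name are placeholders):

```python
import time
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey",
                                       region="YourServiceRegion")
audio_config = speechsdk.audio.AudioConfig(filename="sample.wav")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config,
                                        audio_config=audio_config)

# Intermediate hypotheses arrive on `recognizing`; final results on `recognized`.
recognizer.recognizing.connect(lambda evt: print("RECOGNIZING:", evt.result.text))
recognizer.recognized.connect(lambda evt: print("RECOGNIZED:", evt.result.text))
recognizer.canceled.connect(lambda evt: print("CANCELED:", evt.reason))

recognizer.start_continuous_recognition()
time.sleep(10)  # let recognition run; in real code, wait on session_stopped
recognizer.stop_continuous_recognition()
```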