
TranslationRecognizer Class

Performs translation on the speech input.

Inheritance
TranslationRecognizer

Constructor

TranslationRecognizer(translation_config: azure.cognitiveservices.speech.translation.SpeechTranslationConfig, audio_config: typing.Union[azure.cognitiveservices.speech.audio.AudioConfig, NoneType] = None)

Parameters

translation_config

The config for the translation recognizer.

audio_config

The config for the audio input.
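A minimal construction sketch, assuming the azure-cognitiveservices-speech package is available; the key, region, target language, and WAV path below are illustrative placeholders, not real values:

```python
# Sketch: constructing a TranslationRecognizer. The subscription key,
# region, and WAV file path are placeholders, not real values.
def build_recognizer(key: str, region: str, wav_path: str):
    # Imported inside the function so this sketch can be imported even
    # when the Speech SDK is not installed.
    import azure.cognitiveservices.speech as speechsdk

    translation_config = speechsdk.translation.SpeechTranslationConfig(
        subscription=key, region=region)
    translation_config.speech_recognition_language = "en-US"
    translation_config.add_target_language("de")

    # audio_config is optional; pass None (the default) to use the
    # default microphone instead of a file.
    audio_config = speechsdk.audio.AudioConfig(filename=wav_path)

    return speechsdk.translation.TranslationRecognizer(
        translation_config=translation_config, audio_config=audio_config)
```

With audio_config left as None, the recognizer reads from the default system microphone.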

Methods

add_target_language

Adds a language to the list of target languages for translation.

Note

Added in version 1.7.0.

recognize_once

Performs recognition in a blocking (synchronous) mode. Returns after a single utterance is recognized. The end of a single utterance is determined by listening for silence at the end, or until a maximum of 15 seconds of audio is processed. Returns the recognition text as the result. For long-running multi-utterance recognition, use azure.cognitiveservices.speech.Recognizer.start_continuous_recognition_async instead.

recognize_once_async

Performs recognition in a non-blocking (asynchronous) mode. This will recognize a single utterance. The end of a single utterance is determined by listening for silence at the end or until a maximum of 15 seconds of audio is processed. For long-running multi-utterance recognition, use azure.cognitiveservices.speech.Recognizer.start_continuous_recognition_async instead.

remove_target_language

Removes a language from the list of target languages for translation.

Note

Added in version 1.7.0.

add_target_language

Adds a language to the list of target languages for translation.

Note

Added in version 1.7.0.

add_target_language(language: str)

Parameters

language
Required

The language code to add.
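A short sketch of calling this method on an existing recognizer (available since SDK 1.7.0); the recognizer argument and the "fr" code are illustrative:

```python
# Sketch: add a translation target at runtime (SDK 1.7.0+).
# `recognizer` is assumed to be an existing TranslationRecognizer.
def add_target_at_runtime(recognizer, language: str = "fr") -> None:
    # Takes effect for subsequent recognitions; "fr" is an example code.
    recognizer.add_target_language(language)
```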

recognize_once

Performs recognition in a blocking (synchronous) mode. Returns after a single utterance is recognized. The end of a single utterance is determined by listening for silence at the end, or until a maximum of 15 seconds of audio is processed. Returns the recognition text as the result. For long-running multi-utterance recognition, use azure.cognitiveservices.speech.Recognizer.start_continuous_recognition_async instead.

recognize_once() -> azure.cognitiveservices.speech.translation.TranslationRecognitionResult

Returns

The result value of the synchronous recognition.
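A sketch of one blocking recognition with result handling, assuming an already constructed recognizer; the reason checks follow the SDK's ResultReason enum:

```python
# Sketch: one blocking recognition plus result handling. `recognizer`
# is assumed to be an already constructed TranslationRecognizer.
def translate_once(recognizer):
    # Imported here so the sketch can be read without the SDK installed.
    import azure.cognitiveservices.speech as speechsdk

    # Blocks until one utterance ends (silence, or ~15 s of audio).
    result = recognizer.recognize_once()

    if result.reason == speechsdk.ResultReason.TranslatedSpeech:
        # result.translations maps target-language codes to translated text.
        for language, text in result.translations.items():
            print(f"{language}: {text}")
    elif result.reason == speechsdk.ResultReason.NoMatch:
        print("No speech could be recognized.")
    elif result.reason == speechsdk.ResultReason.Canceled:
        print(f"Canceled: {result.cancellation_details}")
    return result
```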

recognize_once_async

Performs recognition in a non-blocking (asynchronous) mode. This will recognize a single utterance. The end of a single utterance is determined by listening for silence at the end or until a maximum of 15 seconds of audio is processed. For long-running multi-utterance recognition, use azure.cognitiveservices.speech.Recognizer.start_continuous_recognition_async instead.

recognize_once_async() -> azure.cognitiveservices.speech.ResultFuture

Returns

A future containing the result value of the asynchronous recognition.
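A sketch of consuming the returned future; the recognizer argument is assumed to be an existing TranslationRecognizer, and get() is the ResultFuture's blocking accessor:

```python
# Sketch: non-blocking recognition via the returned ResultFuture.
# `recognizer` is assumed to be an existing TranslationRecognizer.
def translate_once_async(recognizer):
    # Returns immediately; recognition proceeds in the background.
    future = recognizer.recognize_once_async()

    # ... other work could run here ...

    # get() blocks until the single-utterance result is available.
    return future.get()
```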

remove_target_language

Removes a language from the list of target languages for translation.

Note

Added in version 1.7.0.

remove_target_language(language: str)

Parameters

language
Required

The language code to remove.
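A small sketch pairing this method with add_target_language to adjust targets between recognitions (SDK 1.7.0+); "de" and "es" are example codes:

```python
# Sketch: swap one translation target for another between recognitions
# (SDK 1.7.0+). "de" and "es" are example language codes.
def swap_target(recognizer, old: str = "de", new: str = "es") -> None:
    recognizer.remove_target_language(old)
    recognizer.add_target_language(new)
```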

Attributes

canceled

Signal for events containing canceled recognition results (indicating a recognition attempt that was canceled as a result of a direct cancellation request or, alternatively, a transport or protocol failure).

Callbacks connected to this signal are called with a TranslationRecognitionCanceledEventArgs instance as the single argument.

recognized

Signal for events containing final recognition results (indicating a successful recognition attempt).

Callbacks connected to this signal are called with a TranslationRecognitionEventArgs instance as the single argument.

recognizing

Signal for events containing intermediate recognition results.

Callbacks connected to this signal are called with a TranslationRecognitionEventArgs instance as the single argument.

synthesizing

Signal raised when a translation synthesis result is received.

Callbacks connected to this signal are called with a TranslationSynthesisEventArgs instance as the single argument.
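The signals above are typically consumed together with continuous recognition. A hedged sketch, assuming an existing recognizer; connect is the signals' subscription method, and start/stop_continuous_recognition are inherited from Recognizer:

```python
import time

# Sketch: wiring callbacks to the recognizer's event signals and running
# continuous recognition. `recognizer` is assumed to already exist.
def run_continuous(recognizer, poll_interval: float = 0.5):
    done = False

    def on_recognized(evt):
        # evt is a TranslationRecognitionEventArgs.
        print("RECOGNIZED:", evt.result.translations)

    def on_canceled(evt):
        # evt is a TranslationRecognitionCanceledEventArgs.
        nonlocal done
        print("CANCELED:", evt)
        done = True

    recognizer.recognizing.connect(
        lambda evt: print("PARTIAL:", evt.result.text))
    recognizer.recognized.connect(on_recognized)
    recognizer.canceled.connect(on_canceled)

    recognizer.start_continuous_recognition()
    while not done:
        time.sleep(poll_interval)
    recognizer.stop_continuous_recognition()
```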

target_languages

The target languages for translation.

Note

Added in version 1.7.0.
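A one-line sketch of reading this property (SDK 1.7.0+); the recognizer argument is illustrative:

```python
# Sketch: reading the target_languages property (SDK 1.7.0+).
def list_targets(recognizer):
    # Returns the language codes currently configured for translation.
    return list(recognizer.target_languages)
```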