AudioAnalyzerPreset Class
The Audio Analyzer preset applies a pre-defined set of AI-based analysis operations, including speech transcription. Currently, the preset supports processing of content with a single audio track.
You probably want to use the sub-classes and not this class directly. Known sub-classes are: VideoAnalyzerPreset.
All required parameters must be populated in order to send to Azure.
- Inheritance
  - azure.mgmt.media.models._models_py3.Preset
    - AudioAnalyzerPreset
Constructor
AudioAnalyzerPreset(*, audio_language: Optional[str] = None, mode: Optional[Union[str, _models.AudioAnalysisMode]] = None, experimental_options: Optional[Dict[str, str]] = None, **kwargs)
Variables
- odata_type
- str
Required. The discriminator for derived types. Constant filled by server.
- audio_language
- str
The language for the audio payload in the input using the BCP-47 format of 'language tag-region' (e.g: 'en-US'). If you know the language of your content, it is recommended that you specify it. The language must be specified explicitly for AudioAnalysisMode::Basic, since automatic language detection is not included in basic mode. If the language isn't specified or set to null, automatic language detection will choose the first language detected and process with the selected language for the duration of the file. It does not currently support dynamically switching between languages after the first language is detected. The automatic detection works best with audio recordings with clearly discernible speech. If automatic detection fails to find the language, transcription will fall back to 'en-US'. The list of supported languages is available here: https://go.microsoft.com/fwlink/?linkid=2109463.
- mode
- str or AudioAnalysisMode
Determines the set of audio analysis operations to be performed. If unspecified, the Standard AudioAnalysisMode will be chosen. Known values are: "Standard" and "Basic".