PropertyId Enum

Definition

Defines property ids. Changed in version 1.8.0.

public enum PropertyId
Inheritance
java.lang.Object
java.lang.Enum<PropertyId>
PropertyId

Fields

AudioConfig_AudioProcessingOptions

Audio processing options in JSON format.

AudioConfig_DeviceNameForRender

The device name for audio render. Under normal circumstances, you shouldn't have to use this property directly. Instead, use fromSpeakerOutput(String deviceName). Added in version 1.17.0.

AudioConfig_PlaybackBufferLengthInMs

Playback buffer length in milliseconds, default is 50 milliseconds.

CancellationDetails_Reason

The cancellation reason. Currently unused.

CancellationDetails_ReasonDetailedText

The cancellation detailed text. Currently unused.

CancellationDetails_ReasonText

The cancellation text. Currently unused.

Conversation_ApplicationId

Identifier used to connect to the backend service. Added in version 1.5.0.

Conversation_Connection_Id

Additional identifying information, such as a Direct Line token, used to authenticate with the backend service. Added in version 1.16.0.

Conversation_Conversation_Id

ConversationId for the session. Added in version 1.8.0.

Conversation_Custom_Voice_Deployment_Ids

Comma-separated list of custom voice deployment ids. Added in version 1.8.0.

Conversation_DialogType

Type of dialog backend to connect to. Added in version 1.7.0.

Conversation_From_Id

The from id to be used on speech recognition activities. Added in version 1.5.0.

Conversation_Initial_Silence_Timeout

The silence timeout for listening. Added in version 1.5.0.

Conversation_Request_Bot_Status_Messages

A boolean value that specifies whether the client should receive status messages and generate corresponding turnStatusReceived events. Defaults to true. Added in version 1.15.0.

Conversation_Speech_Activity_Template

Speech activity template. Properties in the template are stamped on the activity generated by the service for speech. Added in version 1.10.0.

DataBuffer_TimeStamp

The time stamp associated with the data buffer written by the client when using Pull/Push audio mode streams. The time stamp is a 64-bit value with a resolution of 90 kHz, the same as the presentation timestamp in an MPEG transport stream. See https://en.wikipedia.org/wiki/Presentation_timestamp. Added in version 1.5.0.
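At a resolution of 90 kHz, one second of audio corresponds to 90,000 ticks. A minimal self-contained sketch of the conversion (the class and method names here are illustrative, not part of the SDK):

```java
// Illustrative helper for the 90 kHz tick resolution used by
// DataBuffer_TimeStamp (same clock as an MPEG transport stream PTS).
public class Pts90kHz {
    static final long TICKS_PER_SECOND = 90_000L;

    // Convert a millisecond offset to 90 kHz ticks.
    static long millisToTicks(long millis) {
        return millis * TICKS_PER_SECOND / 1000L;
    }

    // Convert 90 kHz ticks back to milliseconds.
    static long ticksToMillis(long ticks) {
        return ticks * 1000L / TICKS_PER_SECOND;
    }

    public static void main(String[] args) {
        System.out.println(millisToTicks(1000));  // 1000 ms -> 90000 ticks
        System.out.println(ticksToMillis(45000)); // 45000 ticks -> 500 ms
    }
}
```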

DataBuffer_UserId

The user id associated with the data buffer written by the client when using Pull/Push audio mode streams. Added in version 1.5.0.

LanguageUnderstandingServiceResponse_JsonResult

The Language Understanding Service response output (in JSON format). Available via IntentRecognitionResult.Properties.

PronunciationAssessment_EnableMiscue

Defines whether to enable miscue calculation. When enabled, the pronounced words are compared to the reference text and marked with omission/insertion based on the comparison. The default setting is false. Under normal circumstances, you shouldn't have to use this property directly. Added in version 1.14.0.

PronunciationAssessment_GradingSystem

The point system for pronunciation score calibration (FivePoint or HundredMark). Under normal circumstances, you shouldn't have to use this property directly. Added in version 1.14.0.

PronunciationAssessment_Granularity

The pronunciation evaluation granularity (Phoneme, Word, or FullText). Under normal circumstances, you shouldn't have to use this property directly. Added in version 1.14.0.

PronunciationAssessment_Json

The JSON string of pronunciation assessment parameters. Under normal circumstances, you shouldn't have to use this property directly. Added in version 1.14.0.

PronunciationAssessment_NBestPhonemeCount

The pronunciation evaluation nbest phoneme count. Under normal circumstances, you shouldn't have to use this property directly. Instead, use setNBestPhonemeCount(int count). Added in version 1.20.0.

PronunciationAssessment_Params

Pronunciation assessment parameters. This property is intended to be read-only; the SDK uses it internally. Added in version 1.14.0.

PronunciationAssessment_PhonemeAlphabet

The pronunciation evaluation phoneme alphabet. The valid values are "SAPI" (default) and "IPA". Under normal circumstances, you shouldn't have to use this property directly. Instead, use setPhonemeAlphabet(String value). Added in version 1.20.0.

PronunciationAssessment_ReferenceText

The reference text of the audio for pronunciation evaluation. For this and the following pronunciation assessment parameters, see https://docs.microsoft.com/azure/cognitive-services/speech-service/rest-speech-to-text#pronunciation-assessment-parameters for details. Under normal circumstances, you shouldn't have to use this property directly. Added in version 1.14.0.
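As the entries above note, the PronunciationAssessment_* properties are normally set through PronunciationAssessmentConfig rather than directly. A minimal sketch, assuming a placeholder subscription key and region ("YourSubscriptionKey"/"YourRegion") and the default microphone as input:

```java
import com.microsoft.cognitiveservices.speech.*;
import com.microsoft.cognitiveservices.speech.audio.AudioConfig;

// Sketch: configure pronunciation assessment via the dedicated config class
// instead of writing PronunciationAssessment_* properties by hand.
public class PronunciationAssessmentSketch {
    public static void main(String[] args) {
        PronunciationAssessmentConfig pronConfig = new PronunciationAssessmentConfig(
                "good morning",                                   // PronunciationAssessment_ReferenceText
                PronunciationAssessmentGradingSystem.HundredMark, // PronunciationAssessment_GradingSystem
                PronunciationAssessmentGranularity.Phoneme,       // PronunciationAssessment_Granularity
                true);                                            // PronunciationAssessment_EnableMiscue
        pronConfig.setPhonemeAlphabet("IPA"); // PronunciationAssessment_PhonemeAlphabet
        pronConfig.setNBestPhonemeCount(5);   // PronunciationAssessment_NBestPhonemeCount

        SpeechConfig speechConfig = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourRegion");
        speechConfig.setSpeechRecognitionLanguage("en-US");
        SpeechRecognizer recognizer =
                new SpeechRecognizer(speechConfig, AudioConfig.fromDefaultMicrophoneInput());
        pronConfig.applyTo(recognizer); // attaches the assessment parameters to the recognizer
    }
}
```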

SpeakerRecognition_Api_Version

Version of Speaker Recognition to use. Added in version 1.18.0.

Speech_LogFilename

The file name to write logs. Added in version 1.4.0.

Speech_SessionId

The session id. This id is a universally unique identifier (aka UUID) representing a specific binding of an audio input stream and the underlying speech recognition instance to which it's bound. Under normal circumstances, you shouldn't have to use this property directly. Instead, use getSessionId().

SpeechServiceAuthorization_Token

The Cognitive Services Speech Service authorization token (aka access token). Under normal circumstances, you shouldn't have to use this property directly. Instead, use fromAuthorizationToken(String authorizationToken, String region), setAuthorizationToken(String token), IntentRecognizer.setAuthorizationToken, TranslationRecognizer.setAuthorizationToken.

SpeechServiceAuthorization_Type

The Cognitive Services Speech Service authorization type. Currently unused.

SpeechServiceConnection_AutoDetectSourceLanguageResult

The auto-detect source language result. Added in version 1.8.0.

SpeechServiceConnection_AutoDetectSourceLanguages

The auto-detect source languages. Added in version 1.8.0.

SpeechServiceConnection_EnableAudioLogging

A boolean value specifying whether audio logging is enabled in the service. Added in version 1.5.0.

SpeechServiceConnection_Endpoint

The Cognitive Services Speech Service endpoint (url). Under normal circumstances, you shouldn't have to use this property directly. Instead, use fromEndpoint(java.net.URI endpoint, String subscriptionKey). NOTE: This endpoint is not the same as the endpoint used to obtain an access token.

SpeechServiceConnection_EndpointId

The Cognitive Services Custom Speech or Custom Voice Service endpoint id. Under normal circumstances, you shouldn't have to use this property directly. Instead, use setEndpointId(String value). NOTE: The endpoint id is available in the Custom Speech Portal, listed under Endpoint Details.

SpeechServiceConnection_EndSilenceTimeoutMs

The end silence timeout value (in milliseconds) used by the service. Added in version 1.5.0.

SpeechServiceConnection_Host

The Cognitive Services Speech Service host (url). Under normal circumstances, you shouldn't have to use this property directly. Instead, use fromHost(java.net.URI host, String subscriptionKey).

SpeechServiceConnection_InitialSilenceTimeoutMs

The initial silence timeout value (in milliseconds) used by the service. Added in version 1.5.0.

SpeechServiceConnection_IntentRegion

The Language Understanding Service region. Under normal circumstances, you shouldn't have to use this property directly. Instead, use LanguageUnderstandingModel.

SpeechServiceConnection_Key

The Cognitive Services Speech Service subscription key. If you are using an intent recognizer, you need to specify the LUIS endpoint key for your particular LUIS app. Under normal circumstances, you shouldn't have to use this property directly. Instead, use fromSubscription(String subscriptionKey, String region).
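Most connection properties have dedicated helpers like fromSubscription, but any PropertyId can also be set or read generically on a SpeechConfig. A minimal sketch, assuming a placeholder subscription key and region ("YourSubscriptionKey"/"YourRegion"):

```java
import com.microsoft.cognitiveservices.speech.PropertyId;
import com.microsoft.cognitiveservices.speech.SpeechConfig;

// Sketch: generic property access on a SpeechConfig via PropertyId.
public class PropertyIdSketch {
    public static void main(String[] args) {
        SpeechConfig config = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourRegion");

        // Equivalent to config.setSpeechRecognitionLanguage("en-US"):
        config.setProperty(PropertyId.SpeechServiceConnection_RecoLanguage, "en-US");

        // Read a property back; the region was set by fromSubscription above.
        String region = config.getProperty(PropertyId.SpeechServiceConnection_Region);
        System.out.println(region);
    }
}
```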

SpeechServiceConnection_ProxyHostName

The host name of the proxy server used to connect to the Cognitive Services Speech Service. Under normal circumstances, you shouldn't have to use this property directly. Instead, use setProxy(String proxyHostName, int proxyPort, String proxyUserName, String proxyPassword). NOTE: This property id was added in version 1.1.0.

SpeechServiceConnection_ProxyPassword

The password of the proxy server used to connect to the Cognitive Services Speech Service. Under normal circumstances, you shouldn't have to use this property directly. Instead, use setProxy(String proxyHostName, int proxyPort, String proxyUserName, String proxyPassword). NOTE: This property id was added in version 1.1.0.

SpeechServiceConnection_ProxyPort

The port of the proxy server used to connect to the Cognitive Services Speech Service. Under normal circumstances, you shouldn't have to use this property directly. Instead, use setProxy(String proxyHostName, int proxyPort, String proxyUserName, String proxyPassword). NOTE: This property id was added in version 1.1.0.

SpeechServiceConnection_ProxyUserName

The user name of the proxy server used to connect to the Cognitive Services Speech Service. Under normal circumstances, you shouldn't have to use this property directly. Instead, use setProxy(String proxyHostName, int proxyPort, String proxyUserName, String proxyPassword). NOTE: This property id was added in version 1.1.0.

SpeechServiceConnection_RecoBackend

The string to specify the backend to be used for speech recognition; allowed options are online and offline. Under normal circumstances, you shouldn't use this property directly. Currently the offline option is only valid when EmbeddedSpeechConfig is used. Added in version 1.19.0.

SpeechServiceConnection_RecoLanguage

The spoken language to be recognized (in BCP-47 format). Under normal circumstances, you shouldn't have to use this property directly. Instead, use setSpeechRecognitionLanguage(String value).

SpeechServiceConnection_RecoMode

The Cognitive Services Speech Service recognition mode. Can be "INTERACTIVE", "CONVERSATION", or "DICTATION". This property is intended to be read-only; the SDK uses it internally.

SpeechServiceConnection_RecoModelKey

The decryption key of the model to be used for speech recognition. Under normal circumstances, you shouldn't use this property directly. Currently this is only valid when EmbeddedSpeechConfig is used. Added in version 1.19.0.

SpeechServiceConnection_RecoModelName

The name of the model to be used for speech recognition. Under normal circumstances, you shouldn't use this property directly. Currently this is only valid when EmbeddedSpeechConfig is used. Added in version 1.19.0.

SpeechServiceConnection_Region

The Cognitive Services Speech Service region. Under normal circumstances, you shouldn't have to use this property directly. Instead, use fromSubscription(String subscriptionKey, String region), fromEndpoint(java.net.URI endpoint, String subscriptionKey), fromHost(java.net.URI host, String subscriptionKey), fromAuthorizationToken(String authorizationToken, String region).

SpeechServiceConnection_SynthBackend

The string to specify the TTS backend; valid options are online and offline. Under normal circumstances, you shouldn't have to use this property directly. Instead, use fromPath(String path) or fromPaths(List<String> paths) to set the synthesis backend to offline. Added in version 1.19.0.

SpeechServiceConnection_SynthEnableCompressedAudioTransmission

Indicates whether to use a compressed audio format for speech synthesis audio transmission. This property only takes effect when SpeechServiceConnection_SynthOutputFormat is set to a PCM format. If this property is not set and GStreamer is available, the SDK uses a compressed format for synthesized audio transmission and decodes it on the client. You can set this property to "false" to transmit raw PCM format on the wire. Added in version 1.16.0.

SpeechServiceConnection_SynthLanguage

The spoken language to be synthesized (e.g. en-US). Added in version 1.7.0.

SpeechServiceConnection_SynthModelKey

The decryption key of the model to be used for speech synthesis. Under normal circumstances, you shouldn't use this property directly. Instead, use setSpeechSynthesisVoice(String name, String key). Added in version 1.19.0.

SpeechServiceConnection_SynthOfflineDataPath

The data file path(s) for the offline synthesis engine; only valid when the synthesis backend is offline. Under normal circumstances, you shouldn't have to use this property directly. Instead, use fromPath(String path) or fromPaths(List<String> paths). Added in version 1.19.0.

SpeechServiceConnection_SynthOfflineVoice

The name of the offline TTS voice to be used for speech synthesis. Under normal circumstances, you shouldn't use this property directly. Instead, use setSpeechSynthesisVoice(String name, String key). Added in version 1.19.0.

SpeechServiceConnection_SynthOutputFormat

The string to specify the TTS output audio format (e.g. riff-16khz-16bit-mono-pcm). Added in version 1.7.0.

SpeechServiceConnection_SynthVoice

The name of the TTS voice to be used for speech synthesis. Added in version 1.7.0.

SpeechServiceConnection_TranslationFeatures

Translation features. For internal use.

SpeechServiceConnection_TranslationToLanguages

The list of comma-separated languages (BCP-47 format) used as target translation languages. Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechTranslationConfig.addTargetLanguage, SpeechTranslationConfig.getTargetLanguages, TranslationRecognizer.getTargetLanguages.

SpeechServiceConnection_TranslationVoice

The name of the Cognitive Services Text to Speech Service voice. Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechTranslationConfig.setVoiceName. NOTE: See the Speech Service documentation for valid voice names.

SpeechServiceConnection_Url

The URL string built from the speech configuration. This property is intended to be read-only; the SDK uses it internally. Added in version 1.5.0.

SpeechServiceConnection_VoicesListEndpoint

The Cognitive Services Speech Service voices list API endpoint (url). Under normal circumstances, you don't need to specify this property; the SDK constructs it based on the region/host/endpoint of SpeechConfig. Added in version 1.16.0.

SpeechServiceResponse_JsonErrorDetails

The Cognitive Services Speech Service error details (in JSON format). Under normal circumstances, you shouldn't have to use this property directly. Instead, use getErrorDetails().

SpeechServiceResponse_JsonResult

The Cognitive Services Speech Service response output (in JSON format). This property is available on recognition result objects only.

SpeechServiceResponse_OutputFormatOption

A string value specifying the output format option in the response result. Internal use only. Added in version 1.5.0.

SpeechServiceResponse_PostProcessingOption

A string value specifying which post-processing option should be used by the service. The allowed value is "TrueText". Added in version 1.5.0.

SpeechServiceResponse_ProfanityOption

The requested Cognitive Services Speech Service response output profanity setting. Allowed values are "masked", "removed", and "raw". Added in version 1.5.0.

SpeechServiceResponse_RecognitionLatencyMs

The recognition latency in milliseconds. Read-only, available on final speech/translation/intent results. This measures the latency between when an audio input is received by the SDK, and the moment the final result is received from the service. The SDK computes the time difference between the last audio fragment from the audio input that is contributing to the final result, and the time the final result is received from the speech service. Added in version 1.3.0.

SpeechServiceResponse_RequestDetailedResultTrueFalse

The requested Cognitive Services Speech Service response output format (simple or detailed). Under normal circumstances, you shouldn't have to use this property directly. Instead, use setOutputFormat(OutputFormat format).

SpeechServiceResponse_RequestProfanityFilterTrueFalse

The requested Cognitive Services Speech Service response output profanity level. Currently unused.

SpeechServiceResponse_RequestSnr

A boolean value specifying whether to include SNR (signal to noise ratio) in the response result. Added in version 1.18.0.

SpeechServiceResponse_RequestWordLevelTimestamps

A boolean value specifying whether to include word-level timestamps in the response result. Added in version 1.5.0.

SpeechServiceResponse_StablePartialResultThreshold

The number of times a word has to appear in partial results to be returned. Added in version 1.5.0.

SpeechServiceResponse_SynthesisBackend

Indicates which backend completed the synthesis. Read-only, available on speech synthesis results, except for the result in the SynthesisStarted event. Added in version 1.17.0.

SpeechServiceResponse_SynthesisFinishLatencyMs

The all-bytes latency of speech synthesis in milliseconds. Read-only, available on final speech synthesis results. This measures the latency between when the synthesis starts to be processed and the moment the whole audio is synthesized. Added in version 1.17.0.

SpeechServiceResponse_SynthesisFirstByteLatencyMs

The first-byte latency of speech synthesis in milliseconds. Read-only, available on final speech synthesis results. This measures the latency between when the synthesis starts to be processed and the moment the first byte of audio is available. Added in version 1.17.0.

SpeechServiceResponse_SynthesisUnderrunTimeMs

The underrun time for speech synthesis in milliseconds. Read-only, available on results in SynthesisCompleted events. This measures the total underrun time from when the playback buffer (AudioConfig_PlaybackBufferLengthInMs) is filled until synthesis completes. Added in version 1.17.0.

SpeechServiceResponse_TranslationRequestStablePartialResult

A boolean value requesting stabilized translation partial results by omitting words at the end. Added in version 1.5.0.

Methods

getValue()

Returns the internal value of the property id.

public int getValue()

Returns

int

the speech property id
