MicrosoftLanguageTokenizer Class

Divides text using language-specific rules.

All required parameters must be populated in order to send to Azure.

Inheritance
azure.search.documents.indexes._generated.models._models_py3.LexicalTokenizer
MicrosoftLanguageTokenizer

Constructor

MicrosoftLanguageTokenizer(*, name: str, max_token_length: Optional[int] = 255, is_search_tokenizer: Optional[bool] = False, language: Optional[Union[str, azure.search.documents.indexes._generated.models._search_client_enums.MicrosoftTokenizerLanguage]] = None, **kwargs)

Parameters

odata_type
str
Required

Required. Identifies the concrete type of the tokenizer. Constant filled by server.

name
str
Required

Required. The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.

max_token_length
int
Default value: 255

The maximum token length. Tokens longer than the maximum length are split. The largest token length that can be used is 300 characters. Tokens longer than 300 characters are first split into tokens of length 300, and then each of those tokens is split based on the maximum token length set.

is_search_tokenizer
bool
Default value: False

A value indicating how the tokenizer is used. Set to true if used as the search tokenizer; set to false if used as the indexing tokenizer.

language
str or MicrosoftTokenizerLanguage
Default value: None

The language to use. The default is English. Possible values include: "bangla", "bulgarian", "catalan", "chineseSimplified", "chineseTraditional", "croatian", "czech", "danish", "dutch", "english", "french", "german", "greek", "gujarati", "hindi", "icelandic", "indonesian", "italian", "japanese", "kannada", "korean", "malay", "malayalam", "marathi", "norwegianBokmaal", "polish", "portuguese", "portugueseBrazilian", "punjabi", "romanian", "russian", "serbianCyrillic", "serbianLatin", "slovenian", "spanish", "swedish", "tamil", "telugu", "thai", "ukrainian", "urdu", "vietnamese".
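The two-stage splitting rule for `max_token_length` can be illustrated with a small sketch in plain Python. This is not the service's actual implementation, only a model of the documented behavior: tokens over 300 characters are first cut into 300-character pieces, and each piece is then cut to the configured maximum length.

```python
def split_long_token(token: str, max_token_length: int = 255) -> list[str]:
    """Model the documented splitting rule (illustrative, not the service code)."""
    HARD_LIMIT = 300  # tokens are first split at 300 characters
    # Stage 1: split into chunks of at most 300 characters.
    chunks = [token[i:i + HARD_LIMIT] for i in range(0, len(token), HARD_LIMIT)]
    # Stage 2: split each chunk at the configured max token length.
    result = []
    for chunk in chunks:
        result.extend(
            chunk[i:i + max_token_length]
            for i in range(0, len(chunk), max_token_length)
        )
    return result

# A 600-character token with the default max_token_length of 255:
pieces = split_long_token("x" * 600)
print([len(p) for p in pieces])  # → [255, 45, 255, 45]
```

Note that the 300-character pre-split means a 600-character token yields pieces of 255 and 45 characters rather than 255, 255, and 90.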