CustomAnalyzer Class

Allows you to take control over the process of converting text into indexable/searchable tokens. It's a user-defined configuration consisting of a single predefined tokenizer and one or more filters. The tokenizer is responsible for breaking text into tokens, and the filters for modifying tokens emitted by the tokenizer.

All required parameters must be populated in order to send to Azure.

Inheritance
azure.search.documents.indexes._generated.models._models_py3.LexicalAnalyzer
CustomAnalyzer

Constructor

CustomAnalyzer(**kwargs)

Parameters

odata_type
str
Required

Required. Identifies the concrete type of the analyzer. Constant filled by server.

name
str
Required

Required. The name of the analyzer. It must contain only letters, digits, spaces, dashes, or underscores; it can start and end only with alphanumeric characters; and it is limited to 128 characters.

tokenizer_name
str or LexicalTokenizerName
Required

Required. The name of the tokenizer to use to divide continuous text into a sequence of tokens, such as breaking a sentence into words. Possible values include: "classic", "edgeNGram", "keyword_v2", "letter", "lowercase", "microsoft_language_tokenizer", "microsoft_language_stemming_tokenizer", "nGram", "path_hierarchy_v2", "pattern", "standard_v2", "uax_url_email", "whitespace".

token_filters
list[str or TokenFilterName]
Required

A list of token filters used to filter out or modify the tokens generated by a tokenizer. For example, you can specify a lowercase filter that converts all characters to lowercase. The filters are run in the order in which they are listed.

char_filters
list[str or CharFilterName]
Required

A list of character filters used to prepare input text before it is processed by the tokenizer. For instance, they can replace certain characters or symbols. The filters are run in the order in which they are listed.
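
Examples

The following is a minimal sketch of constructing a custom analyzer. The analyzer name "my-custom-analyzer" and the chosen tokenizer and token filters are illustrative, not required values; string names may be used in place of the corresponding enum members.

from azure.search.documents.indexes.models import CustomAnalyzer

# Split text with the standard tokenizer, then lowercase each token and
# fold accented characters to their closest ASCII equivalents.
analyzer = CustomAnalyzer(
    name="my-custom-analyzer",     # must start and end with an alphanumeric character
    tokenizer_name="standard_v2",  # or a LexicalTokenizerName enum member
    token_filters=["lowercase", "asciifolding"],  # applied in the order listed
)

Once defined, the analyzer is typically added to a SearchIndex through its analyzers list and referenced from a field's analyzer_name, so that text in that field is tokenized and filtered by this configuration at indexing and query time.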