CustomAnalyzer Class
Allows you to take control over the process of converting text into indexable/searchable tokens. It's a user-defined configuration consisting of a single predefined tokenizer and one or more filters. The tokenizer is responsible for breaking text into tokens, and the filters for modifying tokens emitted by the tokenizer.
All required parameters must be populated in order to send to Azure.
- Inheritance
- azure.search.documents.indexes._generated.models._models_py3.LexicalAnalyzer
- CustomAnalyzer
Constructor
CustomAnalyzer(**kwargs)
Parameters
- odata_type
- str
Required. Identifies the concrete type of the analyzer. Constant filled by server.
- name
- str
Required. The name of the analyzer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
- tokenizer_name
- str or LexicalTokenizerName
Required. The name of the tokenizer to use to divide continuous text into a sequence of tokens, such as breaking a sentence into words. Possible values include: "classic", "edgeNGram", "keyword_v2", "letter", "lowercase", "microsoft_language_tokenizer", "microsoft_language_stemming_tokenizer", "nGram", "path_hierarchy_v2", "pattern", "standard_v2", "uax_url_email", "whitespace".
- token_filters
- list[str or TokenFilterName]
A list of token filters used to filter out or modify the tokens generated by a tokenizer. For example, you can specify a lowercase filter that converts all characters to lowercase. The filters are run in the order in which they are listed.
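Examples

A minimal sketch of constructing a custom analyzer with this class. The analyzer name my_lowercase_analyzer is a hypothetical example; the tokenizer and filter values ("standard_v2", "lowercase") come from the value lists above.

```python
from azure.search.documents.indexes.models import CustomAnalyzer

# Hypothetical analyzer: split text with the standard tokenizer,
# then lowercase every token. Filters run in the order listed.
my_analyzer = CustomAnalyzer(
    name="my_lowercase_analyzer",
    tokenizer_name="standard_v2",
    token_filters=["lowercase"],
)
```

In typical use, an analyzer like this is added to a SearchIndex's analyzers list and referenced by name from a searchable field's analyzer_name.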