LuceneStandardAnalyzer Class
Standard Apache Lucene analyzer, composed of the standard tokenizer, the lowercase filter, and the stop filter.
All required parameters must be populated in order to send to Azure.
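To make the tokenizer → lowercase → stop-filter pipeline concrete, here is a plain-Python approximation of what this analyzer does to input text. This is an illustrative sketch only, not the Lucene implementation; the function name and the regex-based tokenization are this example's own simplifications.

```python
import re

def simulate_standard_analyzer(text, stopwords=None, max_token_length=255):
    """Rough approximation of the standard analyzer pipeline (illustrative only)."""
    # Normalize the stopword list to lowercase for case-insensitive matching.
    stopwords = {w.lower() for w in (stopwords or [])}
    tokens = []
    # "Standard tokenizer" approximated as runs of alphanumeric characters.
    for token in re.findall(r"[A-Za-z0-9]+", text):
        # Tokens longer than max_token_length are split into chunks,
        # mirroring the documented splitting behavior.
        for i in range(0, len(token), max_token_length):
            chunk = token[i:i + max_token_length].lower()  # lowercase filter
            if chunk not in stopwords:                     # stop filter
                tokens.append(chunk)
    return tokens

print(simulate_standard_analyzer("The Quick Brown Fox", stopwords=["the"]))
```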
- Inheritance
  - azure.search.documents.indexes._generated.models._models_py3.LexicalAnalyzer
  - LuceneStandardAnalyzer
Constructor
LuceneStandardAnalyzer(*, name: str, max_token_length: Optional[int] = 255, stopwords: Optional[List[str]] = None, **kwargs)
Parameters
- odata_type
- str
Required. Identifies the concrete type of the analyzer. Constant filled by server.
- name
- str
Required. The name of the analyzer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
- max_token_length
- int
The maximum token length. Default is 255. Tokens longer than the maximum length are split. The maximum token length that can be used is 300 characters.
- stopwords
- list[str]
A list of stopwords.