PatternTokenizer Class

Tokenizer that uses regex pattern matching to construct distinct tokens. This tokenizer is implemented using Apache Lucene.

All required parameters must be populated in order to send to Azure.

Inheritance
azure.search.documents.indexes._generated.models._models_py3.LexicalTokenizer
PatternTokenizer

Constructor

PatternTokenizer(**kwargs)

Parameters

name
str
Required

Required. The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.

pattern
str
Required

A regular expression pattern to match token separators. Default is an expression that matches one or more whitespace characters.

flags
list[str] or list[RegexFlags]
Required

List of regular expression flags. Possible values of each flag include: 'CANON_EQ', 'CASE_INSENSITIVE', 'COMMENTS', 'DOTALL', 'LITERAL', 'MULTILINE', 'UNICODE_CASE', 'UNIX_LINES'.

group
int
Required

The zero-based ordinal of the matching group in the regular expression to extract into tokens. Use -1 if you want to use the entire pattern to split the input into tokens, irrespective of matching groups. Default is -1.
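Examples

A minimal construction sketch under stated assumptions: the public import path azure.search.documents.indexes.models and the tokenizer name "comma-tokenizer" are used here for illustration and are not taken from this page. The sketch splits input on commas and, with group=-1, uses the entire pattern as the separator rather than a capture group.

from azure.search.documents.indexes.models import PatternTokenizer

# Hypothetical tokenizer definition: split text wherever a comma occurs.
comma_tokenizer = PatternTokenizer(
    name="comma-tokenizer",       # assumed name; must follow the naming rules above
    pattern=r",",                 # token separator pattern
    flags=["CASE_INSENSITIVE"],   # one of the documented flag values
    group=-1,                     # -1: split on the whole match, ignore capture groups
)

A tokenizer defined this way is typically added to a search index's tokenizers collection and then referenced by name from a custom analyzer.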