KeywordTokenizer Class

Definition

Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene.

C#
public class KeywordTokenizer : Azure.Search.Documents.Indexes.Models.LexicalTokenizer
F#
type KeywordTokenizer = class
    inherit LexicalTokenizer
VB
Public Class KeywordTokenizer
Inherits LexicalTokenizer
Inheritance
Object → LexicalTokenizer → KeywordTokenizer

Constructors

KeywordTokenizer(String)

Initializes a new instance of KeywordTokenizer.

Properties

BufferSize

The read buffer size in bytes. Default is 256. Setting this property on new instances of KeywordTokenizer may result in an error when sending new requests to the Azure Cognitive Search service.

MaxTokenLength

The maximum token length. Default is 256. Tokens longer than the maximum length are split. The maximum token length that can be used is 300 characters.
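The single-token and length-splitting behavior described above can be sketched as follows. This is a hypothetical approximation of the semantics, not the Lucene implementation; the function name `keyword_tokenize` is illustrative only.

```python
def keyword_tokenize(text: str, max_token_length: int = 256) -> list[str]:
    """Emit the entire input as one token; if the input exceeds
    max_token_length, split it into chunks of at most that length
    (approximating KeywordTokenizer's documented behavior)."""
    if not text:
        return []
    return [text[i:i + max_token_length]
            for i in range(0, len(text), max_token_length)]

# A short input stays a single token:
keyword_tokenize("Azure Cognitive Search")
# A 600-character input with the default limit of 256 is split
# into chunks of 256, 256, and 88 characters:
keyword_tokenize("a" * 600)
```

In contrast to a standard tokenizer, no splitting happens at whitespace or punctuation; only the length limit produces multiple tokens.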

Name

The name of the tokenizer. It must only contain letters, digits, spaces, dashes, or underscores; can only start and end with alphanumeric characters; and is limited to 128 characters.

(Inherited from LexicalTokenizer)
