@azure/search-documents package

Classes

AzureKeyCredential

A static-key-based credential that supports updating the underlying key value.
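
A minimal sketch of key-based authentication, including a later key rotation (the endpoint, index name, and keys are placeholders):

import { AzureKeyCredential, SearchClient } from "@azure/search-documents";

const credential = new AzureKeyCredential("<initial-api-key>");
const client = new SearchClient("<endpoint>", "<index-name>", credential);

// Rotate the key later without recreating the client.
credential.update("<new-api-key>");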

GeographyPoint

Represents a geographic point in global coordinates.
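
For example, a point for a document field (assuming the object-form constructor of this package; the coordinates are illustrative):

import { GeographyPoint } from "@azure/search-documents";

const location = new GeographyPoint({ latitude: 47.6062, longitude: -122.3321 });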

IndexDocumentsBatch

Class used to perform batch operations with multiple documents to the index.
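
A minimal sketch (the Hotel shape is illustrative; the commented call assumes a SearchClient as in the next sketch):

import { IndexDocumentsBatch } from "@azure/search-documents";

interface Hotel {
  hotelId: string;
  hotelName?: string;
  rating?: number;
}

const batch = new IndexDocumentsBatch<Hotel>();
batch.upload([{ hotelId: "1", hotelName: "Fancy Stay" }]);
batch.merge([{ hotelId: "2", hotelName: "Renamed Stay" }]);
batch.delete([{ hotelId: "3" }]);

// await client.indexDocuments(batch);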

SearchClient

Class used to perform operations against a search index, including querying documents in the index as well as adding, updating, and removing them.
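
A minimal query sketch (the endpoint, key, and field names are placeholders; Hotel is the interface from the previous sketch):

import { AzureKeyCredential, SearchClient } from "@azure/search-documents";

const client = new SearchClient<Hotel>(
  "<endpoint>",
  "<index-name>",
  new AzureKeyCredential("<api-key>"),
);

const response = await client.search("wifi", { filter: "rating ge 4", top: 10 });
for await (const result of response.results) {
  console.log(result.document.hotelName);
}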

SearchIndexClient

Class to perform operations to manage (create, update, list/delete) indexes and synonym maps.
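
A minimal sketch of creating an index (the field names and attributes are illustrative):

import { AzureKeyCredential, SearchIndexClient } from "@azure/search-documents";

const indexClient = new SearchIndexClient("<endpoint>", new AzureKeyCredential("<api-key>"));

await indexClient.createIndex({
  name: "hotels",
  fields: [
    { type: "Edm.String", name: "hotelId", key: true },
    { type: "Edm.String", name: "hotelName", searchable: true },
    { type: "Edm.Double", name: "rating", filterable: true, sortable: true },
  ],
});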

SearchIndexerClient

Class to perform operations to manage (create, update, list/delete) indexers, data sources, and skillsets.
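
A minimal sketch of wiring up a data source and an indexer (the connection string and names are placeholders):

import { AzureKeyCredential, SearchIndexerClient } from "@azure/search-documents";

const indexerClient = new SearchIndexerClient("<endpoint>", new AzureKeyCredential("<api-key>"));

await indexerClient.createDataSourceConnection({
  name: "hotels-datasource",
  type: "azureblob",
  connectionString: "<storage-connection-string>",
  container: { name: "hotels" },
});

await indexerClient.createIndexer({
  name: "hotels-indexer",
  dataSourceName: "hotels-datasource",
  targetIndexName: "hotels",
});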

SearchIndexingBufferedSender

Class used to perform buffered operations against a search index, including adding, updating, and removing documents.
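
A hedged sketch, assuming the (client, documentKeyRetriever, options) constructor shape; client and Hotel are from the SearchClient sketch above:

import { SearchIndexingBufferedSender } from "@azure/search-documents";

const sender = new SearchIndexingBufferedSender<Hotel>(
  client,
  (doc) => doc.hotelId, // tells the sender which field is the document key
  { autoFlush: true },
);

await sender.uploadDocuments([{ hotelId: "1", hotelName: "Fancy Stay" }]);
await sender.flush();
await sender.dispose();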

Interfaces

AnalyzeRequest

Specifies some text and analysis components used to break that text into tokens.

AnalyzeResult

The result of testing an analyzer on text.

AnalyzedTokenInfo

Information about a token returned by an analyzer.

AsciiFoldingTokenFilter

Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. This token filter is implemented using Apache Lucene.

AutocompleteItem

The result of Autocomplete requests.

AutocompleteRequest

Parameters for fuzzy matching, and other autocomplete query behaviors.

AutocompleteResult

The result of an Autocomplete query.

AzureActiveDirectoryApplicationCredentials

Credentials of a registered application created for your search service, used for authenticated access to the encryption keys stored in Azure Key Vault.

BM25Similarity

Ranking function based on the Okapi BM25 similarity algorithm. BM25 is a TF-IDF-like algorithm that includes length normalization (controlled by the 'b' parameter) as well as term frequency saturation (controlled by the 'k1' parameter).
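
As a sketch, the algorithm is selected on an index definition through its similarity property (the k1 and b values are illustrative, not recommendations):

import type { SearchIndex } from "@azure/search-documents";

const index: SearchIndex = {
  name: "hotels",
  fields: [{ type: "Edm.String", name: "hotelId", key: true }],
  similarity: {
    odatatype: "#Microsoft.Azure.Search.BM25Similarity",
    k1: 1.2, // controls term frequency saturation
    b: 0.75, // controls document length normalization
  },
};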

BaseCharFilter

Base type for character filters.

BaseCognitiveServicesAccount

Base type for describing any Azure AI service resource attached to a skillset.

BaseDataChangeDetectionPolicy

Base type for data change detection policies.

BaseDataDeletionDetectionPolicy

Base type for data deletion detection policies.

BaseLexicalAnalyzer

Base type for analyzers.

BaseLexicalTokenizer

Base type for tokenizers.

BaseScoringFunction

Base type for functions that can modify document scores during ranking.

BaseSearchIndexerSkill

Base type for skills.

BaseSearchRequestOptions

Parameters for filtering, sorting, faceting, paging, and other search query behaviors.

BaseTokenFilter

Base type for token filters.

BaseVectorQuery

The query parameters for vector and hybrid search queries.

BaseVectorSearchAlgorithmConfiguration

Contains configuration options specific to the algorithm used during indexing and/or querying.

CjkBigramTokenFilter

Forms bigrams of CJK terms that are generated from the standard tokenizer. This token filter is implemented using Apache Lucene.

ClassicSimilarity

Legacy similarity algorithm which uses the Lucene TFIDFSimilarity implementation of TF-IDF. This variation of TF-IDF introduces static document length normalization as well as coordinating factors that penalize documents that only partially match the searched queries.

ClassicTokenizer

Grammar-based tokenizer that is suitable for processing most European-language documents. This tokenizer is implemented using Apache Lucene.

CognitiveServicesAccountKey

An Azure AI service resource provisioned with a key that is attached to a skillset.

CommonGramTokenFilter

Construct bigrams for frequently occurring terms while indexing. Single terms are still indexed too, with bigrams overlaid. This token filter is implemented using Apache Lucene.

ComplexField

Represents a field in an index definition, which describes the name, data type, and search behavior of a field.

ConditionalSkill

A skill that enables scenarios that require a Boolean operation to determine the data to assign to an output.

CorsOptions

Defines options to control Cross-Origin Resource Sharing (CORS) for an index.

CreateOrUpdateIndexOptions

Options for create/update index operation.

CreateOrUpdateSkillsetOptions

Options for create/update skillset operation.

CreateOrUpdateSynonymMapOptions

Options for create/update synonymmap operation.

CreateorUpdateDataSourceConnectionOptions

Options for create/update datasource operation.

CreateorUpdateIndexerOptions

Options for create/update indexer operation.

CustomAnalyzer

Allows you to take control over the process of converting text into indexable/searchable tokens. It's a user-defined configuration consisting of a single predefined tokenizer and one or more filters. The tokenizer is responsible for breaking text into tokens, and the filters for modifying tokens emitted by the tokenizer.
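
A hedged sketch of declaring a custom analyzer and assigning it to a field, assuming this package's tokenizerName, tokenFilters, and analyzerName property names (the analyzer composition is illustrative):

import type { SearchIndex } from "@azure/search-documents";

const index: SearchIndex = {
  name: "hotels",
  fields: [
    { type: "Edm.String", name: "hotelId", key: true },
    { type: "Edm.String", name: "description", searchable: true, analyzerName: "my_analyzer" },
  ],
  analyzers: [
    {
      odatatype: "#Microsoft.Azure.Search.CustomAnalyzer",
      name: "my_analyzer",
      tokenizerName: "standard_v2",
      tokenFilters: ["lowercase", "asciifolding"],
    },
  ],
};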

CustomEntity

An object that contains information about the matches that were found, and related metadata.

CustomEntityAlias

A complex object that can be used to specify alternative spellings or synonyms to the root entity name.

CustomEntityLookupSkill

A skill that looks for text from a custom, user-defined list of words and phrases.

DefaultCognitiveServicesAccount

An empty object that represents the default Azure AI service resource for a skillset.

DeleteDataSourceConnectionOptions

Options for delete datasource operation.

DeleteIndexOptions

Options for delete index operation.

DeleteIndexerOptions

Options for delete indexer operation.

DeleteSkillsetOptions

Options for delete skillset operation.

DeleteSynonymMapOptions

Options for delete synonymmap operation.

DictionaryDecompounderTokenFilter

Decomposes compound words found in many Germanic languages. This token filter is implemented using Apache Lucene.

DistanceScoringFunction

Defines a function that boosts scores based on distance from a geographic location.

DistanceScoringParameters

Provides parameter values to a distance scoring function.

DocumentExtractionSkill

A skill that extracts content from a file within the enrichment pipeline.

EdgeNGramTokenFilter

Generates n-grams of the given size(s) starting from the front or the back of an input token. This token filter is implemented using Apache Lucene.

EdgeNGramTokenizer

Tokenizes the input from an edge into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene.

ElisionTokenFilter

Removes elisions. For example, "l'avion" (the plane) will be converted to "avion" (plane). This token filter is implemented using Apache Lucene.

EntityLinkingSkill

Using the Text Analytics API, extracts linked entities from text.

EntityRecognitionSkill

Text analytics entity recognition.

EntityRecognitionSkillV3

Using the Text Analytics API, extracts entities of different types from text.

ExhaustiveKnnParameters

Contains the parameters specific to the exhaustive KNN algorithm.

ExtractiveQueryAnswer

Extracts answer candidates from the contents of the documents returned in response to a query expressed as a question in natural language.

ExtractiveQueryCaption

Extracts captions from the matching documents that contain passages relevant to the search query.

FacetResult

A single bucket of a facet query result. Reports the number of documents with a field value falling within a particular range or having a particular value or interval.

FieldMapping

Defines a mapping between a field in a data source and a target field in an index.
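
For example, an indexer can map a blob path onto the index key using the built-in base64Encode mapping function (the names are illustrative; indexerClient is from the SearchIndexerClient sketch above):

import type { SearchIndexer } from "@azure/search-documents";

const indexer: SearchIndexer = {
  name: "hotels-indexer",
  dataSourceName: "hotels-datasource",
  targetIndexName: "hotels",
  fieldMappings: [
    {
      sourceFieldName: "metadata_storage_path",
      targetFieldName: "hotelId",
      mappingFunction: { name: "base64Encode" },
    },
  ],
};

await indexerClient.createIndexer(indexer);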

FieldMappingFunction

Represents a function that transforms a value from a data source before indexing.

FreshnessScoringFunction

Defines a function that boosts scores based on the value of a date-time field.

FreshnessScoringParameters

Provides parameter values to a freshness scoring function.

GetDocumentOptions

Options for retrieving a single document.

HighWaterMarkChangeDetectionPolicy

Defines a data change detection policy that captures changes based on the value of a high water mark column.

HnswParameters

Contains the parameters specific to the HNSW algorithm.

ImageAnalysisSkill

A skill that analyzes image files. It extracts a rich set of visual features based on the image content.

IndexDocumentsClient

Index Documents Client

IndexDocumentsOptions

Options for the modify index batch operation.

IndexDocumentsResult

Response containing the status of operations for all documents in the indexing request.

IndexerExecutionResult

Represents the result of an individual indexer execution.

IndexingParameters

Represents parameters for indexer execution.

IndexingParametersConfiguration

A dictionary of indexer-specific configuration properties. Each name is the name of a specific property. Each value must be of a primitive type.

IndexingResult

Status of an indexing operation for a single document.

IndexingSchedule

Represents a schedule for indexer execution.

InputFieldMappingEntry

Input field mapping for a skill.

KeepTokenFilter

A token filter that only keeps tokens with text contained in a specified list of words. This token filter is implemented using Apache Lucene.

KeyPhraseExtractionSkill

A skill that uses text analytics for key phrase extraction.

KeywordMarkerTokenFilter

Marks terms as keywords. This token filter is implemented using Apache Lucene.

KeywordTokenizer

Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene.

LanguageDetectionSkill

A skill that detects the language of input text and reports a single language code for every document submitted on the request. The language code is paired with a score indicating the confidence of the analysis.

LengthTokenFilter

Removes words that are too long or too short. This token filter is implemented using Apache Lucene.

LimitTokenFilter

Limits the number of tokens while indexing. This token filter is implemented using Apache Lucene.

ListSearchResultsPageSettings

Arguments for retrieving the next page of search results.

LuceneStandardAnalyzer

Standard Apache Lucene analyzer; composed of the standard tokenizer, lowercase filter, and stop filter.

LuceneStandardTokenizer

Breaks text following the Unicode Text Segmentation rules. This tokenizer is implemented using Apache Lucene.

MagnitudeScoringFunction

Defines a function that boosts scores based on the magnitude of a numeric field.

MagnitudeScoringParameters

Provides parameter values to a magnitude scoring function.

MappingCharFilter

A character filter that applies mappings defined with the mappings option. Matching is greedy (longest pattern matching at a given point wins). Replacement is allowed to be the empty string. This character filter is implemented using Apache Lucene.

MergeSkill

A skill for merging two or more strings into a single unified string, with an optional user-defined delimiter separating each component part.

MicrosoftLanguageStemmingTokenizer

Divides text using language-specific rules and reduces words to their base forms.

MicrosoftLanguageTokenizer

Divides text using language-specific rules.

NGramTokenFilter

Generates n-grams of the given size(s). This token filter is implemented using Apache Lucene.

NGramTokenizer

Tokenizes the input into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene.

OcrSkill

A skill that extracts text from image files.

OutputFieldMappingEntry

Output field mapping for a skill.

PIIDetectionSkill

Using the Text Analytics API, extracts personal information from an input text and gives you the option of masking it.

PathHierarchyTokenizer

Tokenizer for path-like hierarchies. This tokenizer is implemented using Apache Lucene.

PatternAnalyzer

Flexibly separates text into terms via a regular expression pattern. This analyzer is implemented using Apache Lucene.

PatternCaptureTokenFilter

Uses Java regexes to emit multiple tokens - one for each capture group in one or more patterns. This token filter is implemented using Apache Lucene.

PatternReplaceCharFilter

A character filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This character filter is implemented using Apache Lucene.

PatternReplaceTokenFilter

A token filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This token filter is implemented using Apache Lucene.

PatternTokenizer

Tokenizer that uses regex pattern matching to construct distinct tokens. This tokenizer is implemented using Apache Lucene.

PhoneticTokenFilter

Create tokens for phonetic matches. This token filter is implemented using Apache Lucene.

QueryAnswerResult

An answer is a text passage extracted from the contents of the most relevant documents that matched the query. Answers are extracted from the top search results. Answer candidates are scored and the top answers are selected.

QueryCaptionResult

Captions are the most representative passages from the document relative to the search query. They are often used as a document summary. Captions are only returned for queries of type semantic.

ResourceCounter

Represents a resource's usage and quota.

ScoringProfile

Defines parameters for a search index that influence scoring in search queries.

SearchClientOptions

Client options used to configure Cognitive Search API requests.

SearchDocumentsPageResult

Response containing search page results from an index.

SearchDocumentsResult

Response containing search results from an index.

SearchDocumentsResultBase

Response containing search results from an index.

SearchIndex

Represents a search index definition, which describes the fields and search behavior of an index.

SearchIndexClientOptions

Client options used to configure Cognitive Search API requests.

SearchIndexStatistics

Statistics for a given index. Statistics are collected periodically and are not guaranteed to always be up-to-date.

SearchIndexer

Represents an indexer.

SearchIndexerClientOptions

Client options used to configure Cognitive Search API requests.

SearchIndexerDataContainer

Represents information about the entity (such as Azure SQL table or CosmosDB collection) that will be indexed.

SearchIndexerDataSourceConnection

Represents a datasource definition, which can be used to configure an indexer.

SearchIndexerError

Represents an item- or document-level indexing error.

SearchIndexerKnowledgeStore

Definition of additional projections of enriched data to Azure Blob storage, tables, or files.

SearchIndexerKnowledgeStoreBlobProjectionSelector

Abstract class to share properties between concrete selectors.

SearchIndexerKnowledgeStoreFileProjectionSelector

Projection definition for what data to store in Azure Files.

SearchIndexerKnowledgeStoreObjectProjectionSelector

Projection definition for what data to store in Azure Blob.

SearchIndexerKnowledgeStoreProjection

Container object for various projection selectors.

SearchIndexerKnowledgeStoreProjectionSelector

Abstract class to share properties between concrete selectors.

SearchIndexerKnowledgeStoreTableProjectionSelector

Description for what data to store in Azure Tables.

SearchIndexerLimits
SearchIndexerSkillset

A list of skills.

SearchIndexerStatus

Represents the current status and execution history of an indexer.

SearchIndexerWarning

Represents an item-level warning.

SearchIndexingBufferedSenderOptions

Options for SearchIndexingBufferedSender.

SearchResourceEncryptionKey

A customer-managed encryption key in Azure Key Vault. Keys that you create and manage can be used to encrypt or decrypt data-at-rest in Azure Cognitive Search, such as indexes and synonym maps.

SearchServiceStatistics

Response from a get service statistics request. If successful, it includes service level counters and limits.

SearchSuggester

Defines how the Suggest API should apply to a group of fields in the index.

SemanticConfiguration

Defines a specific configuration to be used in the context of semantic capabilities.

SemanticField

A field that is used as part of the semantic configuration.

SemanticPrioritizedFields

Describes the title, content, and keywords fields to be used for semantic ranking, captions, highlights, and answers.

SemanticSearch

Defines parameters for a search index that influence semantic capabilities.

SemanticSearchOptions

Defines options for semantic search queries.

SentimentSkill

Text analytics positive-negative sentiment analysis, scored as a floating-point value in the range 0 to 1.

SentimentSkillV3

Using the Text Analytics API, evaluates unstructured text and, for each record, provides sentiment labels (such as "negative", "neutral", and "positive") based on the highest confidence score found by the service at the sentence and document level.

ServiceCounters

Represents service-level resource counters and quotas.

ServiceLimits

Represents various service level limits.

ShaperSkill

A skill for reshaping the outputs. It creates a complex type to support composite fields (also known as multipart fields).

ShingleTokenFilter

Creates combinations of tokens as a single token. This token filter is implemented using Apache Lucene.

Similarity

Base type for similarity algorithms. Similarity algorithms are used to calculate scores that tie queries to documents. The higher the score, the more relevant the document is to that specific query. Those scores are used to rank the search results.

SimpleField

Represents a field in an index definition, which describes the name, data type, and search behavior of a field.

SnowballTokenFilter

A filter that stems words using a Snowball-generated stemmer. This token filter is implemented using Apache Lucene.

SoftDeleteColumnDeletionDetectionPolicy

Defines a data deletion detection policy that implements a soft-deletion strategy. It determines whether an item should be deleted based on the value of a designated 'soft delete' column.

SplitSkill

A skill to split a string into chunks of text.

SqlIntegratedChangeTrackingPolicy

Defines a data change detection policy that captures changes using the Integrated Change Tracking feature of Azure SQL Database.

StemmerOverrideTokenFilter

Provides the ability to override other stemming filters with custom dictionary-based stemming. Any dictionary-stemmed terms will be marked as keywords so that they will not be stemmed with stemmers down the chain. Must be placed before any stemming filters. This token filter is implemented using Apache Lucene.

StemmerTokenFilter

Language specific stemming filter. This token filter is implemented using Apache Lucene.

StopAnalyzer

Divides text at non-letters; applies the lowercase and stopword token filters. This analyzer is implemented using Apache Lucene.

StopwordsTokenFilter

Removes stop words from a token stream. This token filter is implemented using Apache Lucene.

SuggestDocumentsResult

Response containing suggestion query results from an index.

SuggestRequest

Parameters for filtering, sorting, fuzzy matching, and other suggestions query behaviors.

SynonymMap

Represents a synonym map definition.

SynonymTokenFilter

Matches single or multi-word synonyms in a token stream. This token filter is implemented using Apache Lucene.

TagScoringFunction

Defines a function that boosts scores of documents with string values matching a given list of tags.

TagScoringParameters

Provides parameter values to a tag scoring function.

TextTranslationSkill

A skill to translate text from one language to another.

TextWeights

Defines weights on index fields for which matches should boost scoring in search queries.

TruncateTokenFilter

Truncates the terms to a specific length. This token filter is implemented using Apache Lucene.

UaxUrlEmailTokenizer

Tokenizes URLs and emails as one token. This tokenizer is implemented using Apache Lucene.

UniqueTokenFilter

Filters out tokens with same text as the previous token. This token filter is implemented using Apache Lucene.

VectorSearch

Contains configuration options related to vector search.

VectorSearchOptions

Defines options for vector search queries.

VectorSearchProfile

Defines a combination of configurations to use with vector search.

VectorizedQuery

The query parameters to use for vector search when a raw vector value is provided.
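
A hedged sketch of a pure vector query (the descriptionVector field and queryEmbedding array are assumptions about your index and embedding pipeline; an untyped SearchClient is assumed):

const queryEmbedding: number[] = [0.01, 0.02, 0.03]; // placeholder; use a real embedding

const response = await client.search("*", {
  vectorSearchOptions: {
    queries: [
      {
        kind: "vector",
        vector: queryEmbedding,
        fields: ["descriptionVector"],
        kNearestNeighborsCount: 3,
      },
    ],
  },
});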

WebApiSkill

A skill that can call a Web API endpoint, allowing you to extend a skillset by having it call your custom code.

WordDelimiterTokenFilter

Splits words into subwords and performs optional transformations on subword groups. This token filter is implemented using Apache Lucene.

Type Aliases

AnalyzeTextOptions

Options for analyze text operation.

AutocompleteMode

Defines values for AutocompleteMode.

AutocompleteOptions

Options for retrieving completion text for a partial searchText.
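
For example (assuming an index with a suggester named "sg"; client is from the SearchClient sketch above):

const { results } = await client.autocomplete("de", "sg", {
  autocompleteMode: "twoTerms",
});
for (const item of results) {
  console.log(item.text);
}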

BlobIndexerDataToExtract

Known values supported by the service

storageMetadata: Indexes just the standard blob properties and user-specified metadata.
allMetadata: Extracts metadata provided by the Azure blob storage subsystem and the content-type specific metadata (for example, metadata unique to just .png files are indexed).
contentAndMetadata: Extracts all metadata and textual content from each blob.

BlobIndexerImageAction

Known values supported by the service

none: Ignores embedded images or image files in the data set. This is the default.
generateNormalizedImages: Extracts text from images (for example, the word "STOP" from a traffic stop sign), and embeds it into the content field. This action requires that "dataToExtract" is set to "contentAndMetadata". A normalized image refers to additional processing resulting in uniform image output, sized and rotated to promote consistent rendering when you include images in visual search results. This information is generated for each image when you use this option.
generateNormalizedImagePerPage: Extracts text from images (for example, the word "STOP" from a traffic stop sign), and embeds it into the content field, but treats PDF files differently in that each page will be rendered as an image and normalized accordingly, instead of extracting embedded images. Non-PDF file types will be treated the same as if "generateNormalizedImages" was set.

BlobIndexerPDFTextRotationAlgorithm

Known values supported by the service

none: Leverages normal text extraction. This is the default.
detectAngles: May produce better and more readable text extraction from PDF files that have rotated text within them. Note that there may be a small performance speed impact when this parameter is used. This parameter only applies to PDF files, and only to PDFs with embedded text. If the rotated text appears within an embedded image in the PDF, this parameter does not apply.

BlobIndexerParsingMode

Known values supported by the service

default: Set to default for normal file processing.
text: Set to text to improve indexing performance on plain text files in blob storage.
delimitedText: Set to delimitedText when blobs are plain CSV files.
json: Set to json to extract structured content from JSON files.
jsonArray: Set to jsonArray to extract individual elements of a JSON array as separate documents in Azure Cognitive Search.
jsonLines: Set to jsonLines to extract individual JSON entities, separated by a new line, as separate documents in Azure Cognitive Search.

CharFilter

Contains the possible cases for CharFilter.

CharFilterName

Defines values for CharFilterName.
KnownCharFilterName can be used interchangeably with CharFilterName; this enum contains the known values that the service supports.

Known values supported by the service

html_strip: A character filter that attempts to strip out HTML constructs. See https://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/charfilter/HTMLStripCharFilter.html

CjkBigramTokenFilterScripts

Defines values for CjkBigramTokenFilterScripts.

CognitiveServicesAccount

Contains the possible cases for CognitiveServicesAccount.

ComplexDataType

Defines values for ComplexDataType. Possible values include: 'Edm.ComplexType', 'Collection(Edm.ComplexType)'

CountDocumentsOptions

Options for performing the count operation on the index.

CreateDataSourceConnectionOptions

Options for create datasource operation.

CreateIndexOptions

Options for create index operation.

CreateIndexerOptions

Options for create indexer operation.

CreateSkillsetOptions

Options for create skillset operation.

CreateSynonymMapOptions

Options for create synonymmap operation.

CustomEntityLookupSkillLanguage

Defines supported languages for CustomEntityLookupSkill. KnownCustomEntityLookupSkillLanguage can be used interchangeably with this type.

DataChangeDetectionPolicy

Contains the possible cases for DataChangeDetectionPolicy.

DataDeletionDetectionPolicy

Contains the possible cases for DataDeletionDetectionPolicy.

DeleteDocumentsOptions

Options for the delete documents operation.

EdgeNGramTokenFilterSide

Defines values for EdgeNGramTokenFilterSide.

EntityCategory
EntityRecognitionSkillLanguage

Defines supported languages for EntityRecognitionSkill. KnownEntityRecognitionSkillLanguage can be used interchangeably with this type.

ExcludedODataTypes
ExhaustiveKnnAlgorithmConfiguration

Contains configuration options specific to the exhaustive KNN algorithm used during querying, which will perform brute-force search across the entire vector index.

ExtractDocumentKey
GetDataSourceConnectionOptions

Options for get datasource operation.

GetIndexOptions

Options for get index operation.

GetIndexStatisticsOptions

Options for get index statistics operation.

GetIndexerOptions

Options for get indexer operation.

GetIndexerStatusOptions

Options for get indexer status operation.

GetServiceStatisticsOptions

Options for get service statistics operation.

GetSkillSetOptions

Options for get skillset operation.

GetSynonymMapsOptions

Options for get synonymmaps operation.

HnswAlgorithmConfiguration

Contains configuration options specific to the hnsw approximate nearest neighbors algorithm used during indexing time.

ImageAnalysisSkillLanguage

Defines supported languages for ImageAnalysisSkill. KnownImageAnalysisSkillLanguage can be used interchangeably with this type.

ImageDetail
IndexActionType

Defines values for IndexActionType.

IndexDocumentsAction

Represents an index action that operates on a document.

IndexIterator

An iterator for listing the indexes that exist in the Search service. Will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration.
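
For example (indexClient is from the SearchIndexClient sketch above):

for await (const index of indexClient.listIndexes()) {
  console.log(index.name);
}

// Or one request per page:
for await (const page of indexClient.listIndexes().byPage()) {
  console.log(page.map((index) => index.name));
}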

IndexNameIterator

An iterator for listing the indexes that exist in the Search service. Will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration.

IndexerExecutionEnvironment

Known values supported by the service

standard: Indicates that Azure Cognitive Search can determine where the indexer should execute. This is the default environment when nothing is specified and is the recommended value.
private: Indicates that the indexer should run with the environment provisioned specifically for the search service. This should only be specified as the execution environment if the indexer needs to access resources securely over shared private link resources.

IndexerExecutionStatus

Defines values for IndexerExecutionStatus.

IndexerStatus

Defines values for IndexerStatus.

KeyPhraseExtractionSkillLanguage

Defines supported languages for KeyPhraseExtractionSkill. KnownKeyPhraseExtractionSkillLanguage can be used interchangeably with this type.

LexicalAnalyzer

Contains the possible cases for Analyzer.

LexicalAnalyzerName

Defines values for LexicalAnalyzerName.
KnownLexicalAnalyzerName can be used interchangeably with LexicalAnalyzerName; this enum contains the known values that the service supports.

Known values supported by the service

ar.microsoft: Microsoft analyzer for Arabic.
ar.lucene: Lucene analyzer for Arabic.
hy.lucene: Lucene analyzer for Armenian.
bn.microsoft: Microsoft analyzer for Bangla.
eu.lucene: Lucene analyzer for Basque.
bg.microsoft: Microsoft analyzer for Bulgarian.
bg.lucene: Lucene analyzer for Bulgarian.
ca.microsoft: Microsoft analyzer for Catalan.
ca.lucene: Lucene analyzer for Catalan.
zh-Hans.microsoft: Microsoft analyzer for Chinese (Simplified).
zh-Hans.lucene: Lucene analyzer for Chinese (Simplified).
zh-Hant.microsoft: Microsoft analyzer for Chinese (Traditional).
zh-Hant.lucene: Lucene analyzer for Chinese (Traditional).
hr.microsoft: Microsoft analyzer for Croatian.
cs.microsoft: Microsoft analyzer for Czech.
cs.lucene: Lucene analyzer for Czech.
da.microsoft: Microsoft analyzer for Danish.
da.lucene: Lucene analyzer for Danish.
nl.microsoft: Microsoft analyzer for Dutch.
nl.lucene: Lucene analyzer for Dutch.
en.microsoft: Microsoft analyzer for English.
en.lucene: Lucene analyzer for English.
et.microsoft: Microsoft analyzer for Estonian.
fi.microsoft: Microsoft analyzer for Finnish.
fi.lucene: Lucene analyzer for Finnish.
fr.microsoft: Microsoft analyzer for French.
fr.lucene: Lucene analyzer for French.
gl.lucene: Lucene analyzer for Galician.
de.microsoft: Microsoft analyzer for German.
de.lucene: Lucene analyzer for German.
el.microsoft: Microsoft analyzer for Greek.
el.lucene: Lucene analyzer for Greek.
gu.microsoft: Microsoft analyzer for Gujarati.
he.microsoft: Microsoft analyzer for Hebrew.
hi.microsoft: Microsoft analyzer for Hindi.
hi.lucene: Lucene analyzer for Hindi.
hu.microsoft: Microsoft analyzer for Hungarian.
hu.lucene: Lucene analyzer for Hungarian.
is.microsoft: Microsoft analyzer for Icelandic.
id.microsoft: Microsoft analyzer for Indonesian (Bahasa).
id.lucene: Lucene analyzer for Indonesian.
ga.lucene: Lucene analyzer for Irish.
it.microsoft: Microsoft analyzer for Italian.
it.lucene: Lucene analyzer for Italian.
ja.microsoft: Microsoft analyzer for Japanese.
ja.lucene: Lucene analyzer for Japanese.
kn.microsoft: Microsoft analyzer for Kannada.
ko.microsoft: Microsoft analyzer for Korean.
ko.lucene: Lucene analyzer for Korean.
lv.microsoft: Microsoft analyzer for Latvian.
lv.lucene: Lucene analyzer for Latvian.
lt.microsoft: Microsoft analyzer for Lithuanian.
ml.microsoft: Microsoft analyzer for Malayalam.
ms.microsoft: Microsoft analyzer for Malay (Latin).
mr.microsoft: Microsoft analyzer for Marathi.
nb.microsoft: Microsoft analyzer for Norwegian (Bokmål).
no.lucene: Lucene analyzer for Norwegian.
fa.lucene: Lucene analyzer for Persian.
pl.microsoft: Microsoft analyzer for Polish.
pl.lucene: Lucene analyzer for Polish.
pt-BR.microsoft: Microsoft analyzer for Portuguese (Brazil).
pt-BR.lucene: Lucene analyzer for Portuguese (Brazil).
pt-PT.microsoft: Microsoft analyzer for Portuguese (Portugal).
pt-PT.lucene: Lucene analyzer for Portuguese (Portugal).
pa.microsoft: Microsoft analyzer for Punjabi.
ro.microsoft: Microsoft analyzer for Romanian.
ro.lucene: Lucene analyzer for Romanian.
ru.microsoft: Microsoft analyzer for Russian.
ru.lucene: Lucene analyzer for Russian.
sr-cyrillic.microsoft: Microsoft analyzer for Serbian (Cyrillic).
sr-latin.microsoft: Microsoft analyzer for Serbian (Latin).
sk.microsoft: Microsoft analyzer for Slovak.
sl.microsoft: Microsoft analyzer for Slovenian.
es.microsoft: Microsoft analyzer for Spanish.
es.lucene: Lucene analyzer for Spanish.
sv.microsoft: Microsoft analyzer for Swedish.
sv.lucene: Lucene analyzer for Swedish.
ta.microsoft: Microsoft analyzer for Tamil.
te.microsoft: Microsoft analyzer for Telugu.
th.microsoft: Microsoft analyzer for Thai.
th.lucene: Lucene analyzer for Thai.
tr.microsoft: Microsoft analyzer for Turkish.
tr.lucene: Lucene analyzer for Turkish.
uk.microsoft: Microsoft analyzer for Ukrainian.
ur.microsoft: Microsoft analyzer for Urdu.
vi.microsoft: Microsoft analyzer for Vietnamese.
standard.lucene: Standard Lucene analyzer.
standardasciifolding.lucene: Standard ASCII Folding Lucene analyzer. See https://docs.microsoft.com/rest/api/searchservice/Custom-analyzers-in-Azure-Search#Analyzers
keyword: Treats the entire content of a field as a single token. This is useful for data like zip codes, ids, and some product names. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/KeywordAnalyzer.html
pattern: Flexibly separates text into terms via a regular expression pattern. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/PatternAnalyzer.html
simple: Divides text at non-letters and converts them to lower case. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/SimpleAnalyzer.html
stop: Divides text at non-letters; Applies the lowercase and stopword token filters. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/StopAnalyzer.html
whitespace: An analyzer that uses the whitespace tokenizer. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/WhitespaceAnalyzer.html

LexicalTokenizer

Contains the possible cases for Tokenizer.

LexicalTokenizerName

Defines values for LexicalTokenizerName.
KnownLexicalTokenizerName can be used interchangeably with LexicalTokenizerName; this enum contains the known values that the service supports.

Known values supported by the service

classic: Grammar-based tokenizer that is suitable for processing most European-language documents. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/ClassicTokenizer.html
edgeNGram: Tokenizes the input from an edge into n-grams of the given size(s). See https://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/EdgeNGramTokenizer.html
keyword_v2: Emits the entire input as a single token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/KeywordTokenizer.html
letter: Divides text at non-letters. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/LetterTokenizer.html
lowercase: Divides text at non-letters and converts them to lower case. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/LowerCaseTokenizer.html
microsoft_language_tokenizer: Divides text using language-specific rules.
microsoft_language_stemming_tokenizer: Divides text using language-specific rules and reduces words to their base forms.
nGram: Tokenizes the input into n-grams of the given size(s). See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/NGramTokenizer.html
path_hierarchy_v2: Tokenizer for path-like hierarchies. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/path/PathHierarchyTokenizer.html
pattern: Tokenizer that uses regex pattern matching to construct distinct tokens. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/pattern/PatternTokenizer.html
standard_v2: Standard Lucene analyzer; Composed of the standard tokenizer, lowercase filter and stop filter. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/StandardTokenizer.html
uax_url_email: Tokenizes URLs and emails as one token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/UAX29URLEmailTokenizer.html
whitespace: Divides text at whitespace. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/WhitespaceTokenizer.html

ListDataSourceConnectionsOptions

Options for a list data sources operation.

ListIndexersOptions

Options for a list indexers operation.

ListIndexesOptions

Options for a list indexes operation.

ListSkillsetsOptions

Options for a list skillsets operation.

ListSynonymMapsOptions

Options for a list synonymMaps operation.

MergeDocumentsOptions

Options for the merge documents operation.

MergeOrUploadDocumentsOptions

Options for the merge or upload documents operation.

MicrosoftStemmingTokenizerLanguage

Defines values for MicrosoftStemmingTokenizerLanguage.

MicrosoftTokenizerLanguage

Defines values for MicrosoftTokenizerLanguage.

NarrowedModel

Narrows the Model type to include only the selected Fields.

OcrSkillLanguage

Defines supported languages for OcrSkill. KnownOcrSkillLanguage can be used interchangeably with this type.

PIIDetectionSkillMaskingMode

Defines values for PIIDetectionSkillMaskingMode.

Known values supported by the service

none: No masking occurs and the maskedText output will not be returned.
replace: Replaces the detected entities with the character given in the maskingCharacter parameter. The character will be repeated to the length of the detected entity so that the offsets will correctly correspond to both the input text as well as the output maskedText.

PhoneticEncoder

Defines values for PhoneticEncoder.

QueryAnswer

A value that specifies whether answers should be returned as part of the search response. This parameter is only valid if the query type is 'semantic'. If set to extractive, the query returns answers extracted from key passages in the highest ranked documents.

QueryCaption

A value that specifies whether captions should be returned as part of the search response. This parameter is only valid if the query type is 'semantic'. If set, the query returns captions extracted from key passages in the highest ranked documents. When Captions is 'extractive', highlighting is enabled by default. Defaults to 'none'.

QueryType

Defines values for QueryType.

RegexFlags

Defines flags for regex pattern matching.

Known values supported by the service

CANON_EQ: Enables canonical equivalence.
CASE_INSENSITIVE: Enables case-insensitive matching.
COMMENTS: Permits whitespace and comments in the pattern.
DOTALL: Enables dotall mode.
LITERAL: Enables literal parsing of the pattern.
MULTILINE: Enables multiline mode.
UNICODE_CASE: Enables Unicode-aware case folding.
UNIX_LINES: Enables Unix lines mode.

ResetIndexerOptions

Options for reset indexer operation.

RunIndexerOptions

Options for run indexer operation.

ScoringFunction

Contains the possible cases for ScoringFunction.

ScoringFunctionAggregation

Defines values for ScoringFunctionAggregation.

ScoringFunctionInterpolation

Defines values for ScoringFunctionInterpolation.

ScoringStatistics

Defines values for ScoringStatistics.

SearchField

Represents a field in an index definition, which describes the name, data type, and search behavior of a field.

SearchFieldArray

If TModel is an untyped object, an untyped string array; otherwise, the slash-delimited fields of TModel.

SearchFieldDataType

Defines values for SearchFieldDataType. Possible values include: 'Edm.String', 'Edm.Int32', 'Edm.Int64', 'Edm.Double', 'Edm.Boolean', 'Edm.DateTimeOffset', 'Edm.GeographyPoint', 'Collection(Edm.String)', 'Collection(Edm.Int32)', 'Collection(Edm.Int64)', 'Collection(Edm.Double)', 'Collection(Edm.Boolean)', 'Collection(Edm.DateTimeOffset)', 'Collection(Edm.GeographyPoint)', 'Collection(Edm.Single)'

NB: Edm.Single alone is not a valid data type. It must be used as part of a collection type.

SearchIndexerDataSourceType
SearchIndexerSkill

Contains the possible cases for Skill.

SearchIndexingBufferedSenderDeleteDocumentsOptions

Options for SearchIndexingBufferedSenderDeleteDocuments.

SearchIndexingBufferedSenderFlushDocumentsOptions

Options for SearchIndexingBufferedSenderFlushDocuments.

SearchIndexingBufferedSenderMergeDocumentsOptions

Options for SearchIndexingBufferedSenderMergeDocuments.

SearchIndexingBufferedSenderMergeOrUploadDocumentsOptions

Options for SearchIndexingBufferedSenderMergeOrUploadDocuments.

SearchIndexingBufferedSenderUploadDocumentsOptions

Options for SearchIndexingBufferedSenderUploadDocuments.

SearchIterator

An iterator for search results of a particular query. Will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration.

SearchMode

Defines values for SearchMode.

SearchOptions

Options for committing a full search request.

SearchPick

Deeply pick fields of T using valid Cognitive Search OData $select paths.

SearchRequestOptions

Parameters for filtering, sorting, faceting, paging, and other search query behaviors.

SearchRequestQueryTypeOptions
SearchResult

Contains a document found by a search query, plus associated metadata.

SelectArray

If TFields is never, an untyped string array; otherwise, a narrowed Fields[] type to be used elsewhere in the consuming type.

SelectFields

Produces a union of valid Cognitive Search OData $select paths for T using a post-order traversal of the field tree rooted at T.

SemanticErrorMode

partial: If the semantic processing fails, partial results still return. The definition of partial results depends on which semantic step failed and the reason for the failure.

fail: If there is an exception during the semantic processing step, the query will fail and return the appropriate HTTP code depending on the error.

SemanticErrorReason

maxWaitExceeded: The 'semanticMaxWaitInMilliseconds' value was set and the semantic processing duration exceeded that value. Only the base results were returned.

capacityOverloaded: The request was throttled. Only the base results were returned.

transient: At least one step of the semantic process failed.

SemanticSearchResultsType

baseResults: Results without any semantic enrichment or reranking.

rerankedResults: Results have been reranked with the reranker model and will include semantic captions. They will not include any answers, answer highlights, or caption highlights.

SentimentSkillLanguage

Defines supported languages for SentimentSkill. KnownSentimentSkillLanguage can be used interchangeably with this type.

SimilarityAlgorithm

Contains the possible cases for Similarity.

SnowballTokenFilterLanguage

Defines values for SnowballTokenFilterLanguage.

SplitSkillLanguage

Defines supported languages for SplitSkill. KnownSplitSkillLanguage can be used interchangeably with this type.

StemmerTokenFilterLanguage

Defines values for StemmerTokenFilterLanguage.

StopwordsList

Defines values for StopwordsList.

SuggestNarrowedModel
SuggestOptions

Options for retrieving suggestions based on the searchText.
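
For example (again assuming a suggester named "sg"; client and Hotel are from the SearchClient sketch earlier):

const { results } = await client.suggest("wifi", "sg", {
  select: ["hotelId", "hotelName"],
  top: 5,
});
for (const suggestion of results) {
  console.log(suggestion.text, suggestion.document);
}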

SuggestResult

A result containing a document found by a suggestion query, plus associated metadata.

TextSplitMode
TextTranslationSkillLanguage

Defines supported languages for TextTranslationSkill. KnownTextTranslationSkillLanguage can be used interchangeably with this type.

TokenCharacterKind

Defines values for TokenCharacterKind.

TokenFilter

Contains the possible cases for TokenFilter.

TokenFilterName

Defines values for TokenFilterName.
KnownTokenFilterName can be used interchangeably with TokenFilterName; this enum contains the known values that the service supports.

Known values supported by the service

arabic_normalization: A token filter that applies the Arabic normalizer to normalize the orthography. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ar/ArabicNormalizationFilter.html
apostrophe: Strips all characters after an apostrophe (including the apostrophe itself). See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/tr/ApostropheFilter.html
asciifolding: Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.html
cjk_bigram: Forms bigrams of CJK terms that are generated from the standard tokenizer. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/cjk/CJKBigramFilter.html
cjk_width: Normalizes CJK width differences. Folds fullwidth ASCII variants into the equivalent basic Latin, and half-width Katakana variants into the equivalent Kana. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/cjk/CJKWidthFilter.html
classic: Removes English possessives, and dots from acronyms. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/ClassicFilter.html
common_grams: Construct bigrams for frequently occurring terms while indexing. Single terms are still indexed too, with bigrams overlaid. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/commongrams/CommonGramsFilter.html
edgeNGram_v2: Generates n-grams of the given size(s) starting from the front or the back of an input token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.html
elision: Removes elisions. For example, "l'avion" (the plane) will be converted to "avion" (plane). See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/util/ElisionFilter.html
german_normalization: Normalizes German characters according to the heuristics of the German2 snowball algorithm. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/de/GermanNormalizationFilter.html
hindi_normalization: Normalizes text in Hindi to remove some differences in spelling variations. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/hi/HindiNormalizationFilter.html
indic_normalization: Normalizes the Unicode representation of text in Indian languages. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/in/IndicNormalizationFilter.html
keyword_repeat: Emits each incoming token twice, once as keyword and once as non-keyword. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/KeywordRepeatFilter.html
kstem: A high-performance kstem filter for English. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/en/KStemFilter.html
length: Removes words that are too long or too short. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/LengthFilter.html
limit: Limits the number of tokens while indexing. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/LimitTokenCountFilter.html
lowercase: Normalizes token text to lower case. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/LowerCaseFilter.htm
nGram_v2: Generates n-grams of the given size(s). See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/NGramTokenFilter.html
persian_normalization: Applies normalization for Persian. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/fa/PersianNormalizationFilter.html
phonetic: Create tokens for phonetic matches. See https://lucene.apache.org/core/4_10_3/analyzers-phonetic/org/apache/lucene/analysis/phonetic/package-tree.html
porter_stem: Uses the Porter stemming algorithm to transform the token stream. See http://tartarus.org/~martin/PorterStemmer
reverse: Reverses the token string. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/reverse/ReverseStringFilter.html
scandinavian_normalization: Normalizes use of the interchangeable Scandinavian characters. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianNormalizationFilter.html
scandinavian_folding: Folds Scandinavian characters åÅäæÄÆ->a and öÖøØ->o. It also discriminates against use of double vowels aa, ae, ao, oe and oo, leaving just the first one. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianFoldingFilter.html
shingle: Creates combinations of tokens as a single token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/shingle/ShingleFilter.html
snowball: A filter that stems words using a Snowball-generated stemmer. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/snowball/SnowballFilter.html
sorani_normalization: Normalizes the Unicode representation of Sorani text. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ckb/SoraniNormalizationFilter.html
stemmer: Language specific stemming filter. See https://docs.microsoft.com/rest/api/searchservice/Custom-analyzers-in-Azure-Search#TokenFilters
stopwords: Removes stop words from a token stream. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/StopFilter.html
trim: Trims leading and trailing whitespace from tokens. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/TrimFilter.html
truncate: Truncates the terms to a specific length. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/TruncateTokenFilter.html
unique: Filters out tokens with same text as the previous token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/RemoveDuplicatesTokenFilter.html
uppercase: Normalizes token text to upper case. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/UpperCaseFilter.html
word_delimiter: Splits words into subwords and performs optional transformations on subword groups.

UnionToIntersection
UploadDocumentsOptions

Options for the upload documents operation.

VectorFilterMode

Determines whether filters are applied before or after the vector search is performed.

VectorQuery

The query parameters for vector and hybrid search queries.

VectorQueryKind
VectorSearchAlgorithmConfiguration

Contains configuration options specific to the algorithm used during indexing and/or querying.

VectorSearchAlgorithmKind
VectorSearchAlgorithmMetric

The similarity metric to use for vector comparisons.

VisualFeature

Enums

KnownAnalyzerNames

Defines values for AnalyzerName. See https://docs.microsoft.com/rest/api/searchservice/Language-support

KnownBlobIndexerDataToExtract

Known values of BlobIndexerDataToExtract that the service accepts.

KnownBlobIndexerImageAction

Known values of BlobIndexerImageAction that the service accepts.

KnownBlobIndexerPDFTextRotationAlgorithm

Known values of BlobIndexerPDFTextRotationAlgorithm that the service accepts.

KnownBlobIndexerParsingMode

Known values of BlobIndexerParsingMode that the service accepts.

KnownCharFilterNames

Known values of CharFilterName that the service accepts.

KnownCustomEntityLookupSkillLanguage

Known values of CustomEntityLookupSkillLanguage that the service accepts.

KnownEntityCategory

Known values of EntityCategory that the service accepts.

KnownEntityRecognitionSkillLanguage

Known values of EntityRecognitionSkillLanguage that the service accepts.

KnownImageAnalysisSkillLanguage

Known values of ImageAnalysisSkillLanguage that the service accepts.

KnownImageDetail

Known values of ImageDetail that the service accepts.

KnownKeyPhraseExtractionSkillLanguage

Known values of KeyPhraseExtractionSkillLanguage that the service accepts.

KnownOcrSkillLanguage

Known values of OcrSkillLanguage that the service accepts.

KnownRegexFlags

Known values of RegexFlags that the service accepts.

KnownSearchAudience

Known values for Search Audience.

KnownSearchIndexerDataSourceType

Known values of SearchIndexerDataSourceType that the service accepts.

KnownSentimentSkillLanguage

Known values of SentimentSkillLanguage that the service accepts.

KnownSplitSkillLanguage

Known values of SplitSkillLanguage that the service accepts.

KnownTextSplitMode

Known values of TextSplitMode that the service accepts.

KnownTextTranslationSkillLanguage

Known values of TextTranslationSkillLanguage that the service accepts.

KnownTokenFilterNames

Known values of TokenFilterName that the service accepts.

KnownTokenizerNames

Known values of LexicalTokenizerName that the service accepts.

KnownVisualFeature

Known values of VisualFeature that the service accepts.

Functions

createSynonymMapFromFile(string, string)

Helper method to create a SynonymMap object. This is a Node.js-only method.

odata(TemplateStringsArray, unknown[])

Escapes an OData filter expression to avoid errors with quoting string literals. Example usage:

const baseRateMax = 200;
const ratingMin = 4;
const filter = odata`Rooms/any(room: room/BaseRate lt ${baseRateMax}) and Rating ge ${ratingMin}`;

For more information on supported syntax see: https://docs.microsoft.com/en-us/azure/search/search-query-odata-filter

Function Details

createSynonymMapFromFile(string, string)

Helper method to create a SynonymMap object. This is a Node.js-only method.

function createSynonymMapFromFile(name: string, filePath: string): Promise<SynonymMap>

Parameters

name

string

Name of the SynonymMap.

filePath

string

Path of the file that contains the synonyms (separated by new lines).

Returns

Promise<SynonymMap>

SynonymMap object
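
A usage sketch (the map name and file path are placeholders; indexClient is a SearchIndexClient as shown earlier):

import { createSynonymMapFromFile } from "@azure/search-documents";

const synonymMap = await createSynonymMapFromFile("my-synonym-map", "./synonyms.txt");
await indexClient.createSynonymMap(synonymMap);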

odata(TemplateStringsArray, unknown[])

Escapes an OData filter expression to avoid errors with quoting string literals. Example usage:

const baseRateMax = 200;
const ratingMin = 4;
const filter = odata`Rooms/any(room: room/BaseRate lt ${baseRateMax}) and Rating ge ${ratingMin}`;

For more information on supported syntax see: https://docs.microsoft.com/en-us/azure/search/search-query-odata-filter

function odata(strings: TemplateStringsArray, values: unknown[]): string

Parameters

strings

TemplateStringsArray

Array of strings for the expression

values

unknown[]

Array of values for the expression

Returns

string