@azure/search-documents package

Classes

GeographyPoint

Represents a geographic point in global coordinates.
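
A minimal sketch of constructing a point; the object-style constructor shown here matches recent v11 releases of the SDK, but older versions used positional arguments, so verify against your installed version:

import { GeographyPoint } from "@azure/search-documents";

// Coordinates are illustrative (Seattle).
const point = new GeographyPoint({ latitude: 47.6062, longitude: -122.3321 });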

IndexDocumentsBatch

Class used to perform batch operations with multiple documents to the index.
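
A sketch of composing a mixed batch and submitting it through SearchClient.indexDocuments; the Hotel shape and key values are illustrative:

import { IndexDocumentsBatch } from "@azure/search-documents";

interface Hotel { hotelId: string; hotelName?: string; }

const batch = new IndexDocumentsBatch<Hotel>();
batch.upload([{ hotelId: "1", hotelName: "Fancy Stay" }]);
batch.merge([{ hotelId: "2", hotelName: "Renamed Stay" }]);
batch.delete("hotelId", ["3"]);
// Submit with a SearchClient<Hotel> (see the SearchClient entry below):
// await client.indexDocuments(batch);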

SearchClient

Class used to perform operations against a search index, including querying documents in the index as well as adding, updating, and removing them.
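
A short sketch of creating a client and running a query; the endpoint, index name, API key, and document type are placeholders:

import { SearchClient, AzureKeyCredential } from "@azure/search-documents";

interface Hotel { hotelId: string; description?: string; }

const client = new SearchClient<Hotel>(
  "https://<service-name>.search.windows.net",
  "hotels",
  new AzureKeyCredential("<api-key>")
);

const searchResults = await client.search("wifi", { top: 10 });
for await (const result of searchResults.results) {
  console.log(result.document.hotelId);
}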

SearchIndexClient

Class used to perform operations to manage (create, update, list, and delete) indexes and synonym maps.
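
For example, creating a small index (the field names and types are illustrative):

import { SearchIndexClient, AzureKeyCredential } from "@azure/search-documents";

const indexClient = new SearchIndexClient(
  "https://<service-name>.search.windows.net",
  new AzureKeyCredential("<admin-key>")
);

await indexClient.createIndex({
  name: "hotels",
  fields: [
    { type: "Edm.String", name: "hotelId", key: true },
    { type: "Edm.String", name: "description", searchable: true },
  ],
});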

SearchIndexerClient

Class used to perform operations to manage (create, update, list, and delete) indexers, data sources, and skillsets.

SearchIndexingBufferedSender

Class used to perform buffered operations against a search index, including adding, updating, and removing documents.
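
A sketch of buffered indexing, assuming a constructor that takes the client, a document-key retriever, and options (check your SDK version for the exact signature):

import { SearchIndexingBufferedSender } from "@azure/search-documents";

interface Hotel { hotelId: string; description?: string; }

// `client` is a SearchClient<Hotel> as in the SearchClient entry above.
const sender = new SearchIndexingBufferedSender<Hotel>(
  client,
  (doc) => doc.hotelId,
  { autoFlush: true }
);
await sender.uploadDocuments([{ hotelId: "1", description: "Free wifi" }]);
await sender.flush();
await sender.dispose();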

Interfaces

AnalyzeRequest

Specifies some text and analysis components used to break that text into tokens.

AnalyzeResult

The result of testing an analyzer on text.

AnalyzedTokenInfo

Information about a token returned by an analyzer.

AutocompleteItem

The result of Autocomplete requests.

AutocompleteRequest

Parameters for fuzzy matching, and other autocomplete query behaviors.

AutocompleteResult

The result of an Autocomplete query.

AzureActiveDirectoryApplicationCredentials

Credentials of a registered application created for your search service, used for authenticated access to the encryption keys stored in Azure Key Vault.

BaseCharFilter

Base type for character filters.

BaseCognitiveServicesAccount

Base type for describing any cognitive service resource attached to a skillset.

BaseDataChangeDetectionPolicy

Base type for data change detection policies.

BaseDataDeletionDetectionPolicy

Base type for data deletion detection policies.

BaseLexicalAnalyzer

Base type for analyzers.

BaseLexicalTokenizer

Base type for tokenizers.

BaseScoringFunction

Base type for functions that can modify document scores during ranking.

BaseSearchIndexerSkill

Base type for skills.

BaseTokenFilter

Base type for token filters.

ComplexField

Represents a field in an index definition, which describes the name, data type, and search behavior of a field.

CorsOptions

Defines options to control Cross-Origin Resource Sharing (CORS) for an index.

CreateOrUpdateIndexOptions

Options for create/update index operation.

CreateOrUpdateSkillsetOptions

Options for create/update skillset operation.

CreateOrUpdateSynonymMapOptions

Options for create/update synonymmap operation.

CreateorUpdateDataSourceConnectionOptions

Options for create/update datasource operation.

CreateorUpdateIndexerOptions

Options for create/update indexer operation.

CustomAnalyzer

Allows you to take control over the process of converting text into indexable/searchable tokens. It's a user-defined configuration consisting of a single predefined tokenizer and one or more filters. The tokenizer is responsible for breaking text into tokens, and the filters for modifying tokens emitted by the tokenizer.
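
As an illustration, an index might define a custom analyzer from the predefined standard_v2 tokenizer plus lowercase and asciifolding token filters; the analyzer and field names are illustrative, and fields reference the analyzer via analyzerName in the v11 SDK:

import { SearchIndex } from "@azure/search-documents";

const indexWithAnalyzer: SearchIndex = {
  name: "example-index",
  fields: [
    { type: "Edm.String", name: "id", key: true },
    { type: "Edm.String", name: "text", searchable: true, analyzerName: "my_analyzer" },
  ],
  analyzers: [
    {
      odatatype: "#Microsoft.Azure.Search.CustomAnalyzer",
      name: "my_analyzer",
      tokenizerName: "standard_v2",
      tokenFilters: ["lowercase", "asciifolding"],
    },
  ],
};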

CustomEntity

An object that contains information about the matches that were found, and related metadata.

CustomEntityAlias

A complex object that can be used to specify alternative spellings or synonyms to the root entity name.

DeleteDataSourceConnectionOptions

Options for delete datasource operation.

DeleteIndexOptions

Options for delete index operation.

DeleteIndexerOptions

Options for delete indexer operation.

DeleteSkillsetOptions

Options for delete skillset operation.

DeleteSynonymMapOptions

Options for delete synonymmap operation.

DistanceScoringParameters

Provides parameter values to a distance scoring function.

EdgeNGramTokenFilter

Generates n-grams of the given size(s) starting from the front or the back of an input token. This token filter is implemented using Apache Lucene.

FacetResult

A single bucket of a facet query result. Reports the number of documents with a field value falling within a particular range or having a particular value or interval.

FieldMapping

Defines a mapping between a field in a data source and a target field in an index.

FieldMappingFunction

Represents a function that transforms a value from a data source before indexing.
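
For example, a common pattern is base64-encoding a blob path so it can serve as a document key; a sketch using the service-documented base64Encode mapping function:

import { FieldMapping } from "@azure/search-documents";

const mapping: FieldMapping = {
  sourceFieldName: "metadata_storage_path",
  targetFieldName: "id",
  mappingFunction: { name: "base64Encode" },
};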

FreshnessScoringParameters

Provides parameter values to a freshness scoring function.

GetDocumentOptions

Options for retrieving a single document.

IndexDocumentsClient

Index Documents Client

IndexDocumentsOptions

Options for the modify index batch operation.

IndexDocumentsResult

Response containing the status of operations for all documents in the indexing request.

IndexerExecutionResult

Represents the result of an individual indexer execution.

IndexingParameters

Represents parameters for indexer execution.

IndexingParametersConfiguration

A dictionary of indexer-specific configuration properties. Each name is the name of a specific property. Each value must be of a primitive type.

IndexingResult

Status of an indexing operation for a single document.

IndexingSchedule

Represents a schedule for indexer execution.

InputFieldMappingEntry

Input field mapping for a skill.

KeywordTokenizer

Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene.

ListSearchResultsPageSettings

Arguments for retrieving the next page of search results.

LuceneStandardTokenizer

Breaks text following the Unicode Text Segmentation rules. This tokenizer is implemented using Apache Lucene.

MagnitudeScoringParameters

Provides parameter values to a magnitude scoring function.

NGramTokenFilter

Generates n-grams of the given size(s). This token filter is implemented using Apache Lucene.

OutputFieldMappingEntry

Output field mapping for a skill.

PatternAnalyzer

Flexibly separates text into terms via a regular expression pattern. This analyzer is implemented using Apache Lucene.

PatternTokenizer

Tokenizer that uses regex pattern matching to construct distinct tokens. This tokenizer is implemented using Apache Lucene.

ResourceCounter

Represents a resource's usage and quota.

ScoringProfile

Defines parameters for a search index that influence scoring in search queries.

SearchClientOptions

Client options used to configure Cognitive Search API requests.

SearchDocumentsPageResult

Response containing search page results from an index.

SearchDocumentsResult

Response containing search results from an index.

SearchDocumentsResultBase

Response containing search results from an index.

SearchIndex

Represents a search index definition, which describes the fields and search behavior of an index.

SearchIndexClientOptions

Client options used to configure Cognitive Search API requests.

SearchIndexStatistics

Statistics for a given index. Statistics are collected periodically and are not guaranteed to always be up-to-date.

SearchIndexer

Represents an indexer.

SearchIndexerClientOptions

Client options used to configure Cognitive Search API requests.

SearchIndexerDataContainer

Represents information about the entity (such as an Azure SQL table or a CosmosDB collection) that will be indexed.

SearchIndexerDataSourceConnection

Represents a datasource definition, which can be used to configure an indexer.

SearchIndexerError

Represents an item- or document-level indexing error.

SearchIndexerKnowledgeStore

Definition of additional projections of enriched data to Azure Blob storage, tables, or files.

SearchIndexerKnowledgeStoreProjection

Container object for various projection selectors.

SearchIndexerKnowledgeStoreProjectionSelector

Abstract class to share properties between concrete selectors.

SearchIndexerLimits

SearchIndexerSkillset

A list of skills.

SearchIndexerStatus

Represents the current status and execution history of an indexer.

SearchIndexerWarning

Represents an item-level warning.

SearchIndexingBufferedSenderOptions

Options for SearchIndexingBufferedSender.

SearchRequest

Parameters for filtering, sorting, faceting, paging, and other search query behaviors.

SearchRequestOptions

Parameters for filtering, sorting, faceting, paging, and other search query behaviors.

SearchResourceEncryptionKey

A customer-managed encryption key in Azure Key Vault. Keys that you create and manage can be used to encrypt or decrypt data-at-rest in Azure Cognitive Search, such as indexes and synonym maps.

SearchServiceStatistics

Response from a get service statistics request. If successful, it includes service level counters and limits.

SearchSuggester

Defines how the Suggest API should apply to a group of fields in the index.

ServiceCounters

Represents service-level resource counters and quotas.

ServiceLimits

Represents various service level limits.

Similarity

Base type for similarity algorithms. Similarity algorithms are used to calculate scores that tie queries to documents. The higher the score, the more relevant the document is to that specific query. Those scores are used to rank the search results.

SimpleField

Represents a field in an index definition, which describes the name, data type, and search behavior of a field.

SuggestDocumentsResult

Response containing suggestion query results from an index.

SuggestRequest

Parameters for filtering, sorting, fuzzy matching, and other suggestions query behaviors.

SynonymMap

Represents a synonym map definition.
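
For example, a synonym map can be created through a SearchIndexClient using the Solr rule format the service expects (the map name and rules are illustrative):

// `indexClient` is a SearchIndexClient as in the SearchIndexClient entry above.
await indexClient.createSynonymMap({
  name: "my-synonymmap",
  synonyms: [
    "USA, United States, United States of America",
    "WA => Washington",
  ],
});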

TagScoringParameters

Provides parameter values to a tag scoring function.

TextWeights

Defines weights on index fields for which matches should boost scoring in search queries.

Type Aliases

AnalyzeTextOptions

Options for analyze text operation.

AsciiFoldingTokenFilter

Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. This token filter is implemented using Apache Lucene.

AutocompleteMode

Defines values for AutocompleteMode.

AutocompleteOptions

Options for retrieving completion text for a partial searchText.

BM25Similarity

Ranking function based on the Okapi BM25 similarity algorithm. BM25 is a TF-IDF-like algorithm that includes length normalization (controlled by the 'b' parameter) as well as term frequency saturation (controlled by the 'k1' parameter).
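
A sketch of tuning these parameters on an index definition; the values shown are the commonly cited BM25 defaults, not recommendations:

import { SearchIndex } from "@azure/search-documents";

const tunedIndex: SearchIndex = {
  name: "example-index",
  fields: [{ type: "Edm.String", name: "id", key: true }],
  similarity: {
    odatatype: "#Microsoft.Azure.Search.BM25Similarity",
    k1: 1.2,
    b: 0.75,
  },
};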

BlobIndexerDataToExtract

Defines values for BlobIndexerDataToExtract.
KnownBlobIndexerDataToExtract can be used interchangeably with BlobIndexerDataToExtract; this enum contains the known values that the service supports.

Known values supported by the service

storageMetadata: Indexes just the standard blob properties and user-specified metadata.
allMetadata: Extracts metadata provided by the Azure blob storage subsystem and the content-type specific metadata (for example, metadata unique to just .png files is indexed).
contentAndMetadata: Extracts all metadata and textual content from each blob.

BlobIndexerImageAction

Defines values for BlobIndexerImageAction.
KnownBlobIndexerImageAction can be used interchangeably with BlobIndexerImageAction; this enum contains the known values that the service supports.

Known values supported by the service

none: Ignores embedded images or image files in the data set. This is the default.
generateNormalizedImages: Extracts text from images (for example, the word "STOP" from a traffic stop sign), and embeds it into the content field. This action requires that "dataToExtract" is set to "contentAndMetadata". A normalized image refers to additional processing resulting in uniform image output, sized and rotated to promote consistent rendering when you include images in visual search results. This information is generated for each image when you use this option.
generateNormalizedImagePerPage: Extracts text from images (for example, the word "STOP" from a traffic stop sign), and embeds it into the content field, but treats PDF files differently in that each page will be rendered as an image and normalized accordingly, instead of extracting embedded images. Non-PDF file types will be treated the same as if "generateNormalizedImages" was set.

BlobIndexerPDFTextRotationAlgorithm

Defines values for BlobIndexerPDFTextRotationAlgorithm.
KnownBlobIndexerPDFTextRotationAlgorithm can be used interchangeably with BlobIndexerPDFTextRotationAlgorithm; this enum contains the known values that the service supports.

Known values supported by the service

none: Leverages normal text extraction. This is the default.
detectAngles: May produce better and more readable text extraction from PDF files that have rotated text within them. Note that there may be a small performance impact when this parameter is used. This parameter only applies to PDF files, and only to PDFs with embedded text. If the rotated text appears within an embedded image in the PDF, this parameter does not apply.

BlobIndexerParsingMode

Defines values for BlobIndexerParsingMode.
KnownBlobIndexerParsingMode can be used interchangeably with BlobIndexerParsingMode; this enum contains the known values that the service supports.

Known values supported by the service

default: Set to default for normal file processing.
text: Set to text to improve indexing performance on plain text files in blob storage.
delimitedText: Set to delimitedText when blobs are plain CSV files.
json: Set to json to extract structured content from JSON files.
jsonArray: Set to jsonArray to extract individual elements of a JSON array as separate documents in Azure Cognitive Search.
jsonLines: Set to jsonLines to extract individual JSON entities, separated by a new line, as separate documents in Azure Cognitive Search.

CharFilter

Contains the possible cases for CharFilter.

CharFilterName

Defines values for CharFilterName.
KnownCharFilterName can be used interchangeably with CharFilterName; this enum contains the known values that the service supports.

Known values supported by the service

html_strip: A character filter that attempts to strip out HTML constructs. See https://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/charfilter/HTMLStripCharFilter.html

CjkBigramTokenFilter

Forms bigrams of CJK terms that are generated from the standard tokenizer. This token filter is implemented using Apache Lucene.

CjkBigramTokenFilterScripts

Defines values for CjkBigramTokenFilterScripts.

ClassicSimilarity

Legacy similarity algorithm which uses the Lucene TFIDFSimilarity implementation of TF-IDF. This variation of TF-IDF introduces static document length normalization as well as coordinating factors that penalize documents that only partially match the searched queries.

ClassicTokenizer

Grammar-based tokenizer that is suitable for processing most European-language documents. This tokenizer is implemented using Apache Lucene.

CognitiveServicesAccount

Contains the possible cases for CognitiveServicesAccount.

CognitiveServicesAccountKey

A cognitive service resource provisioned with a key that is attached to a skillset.

CommonGramTokenFilter

Construct bigrams for frequently occurring terms while indexing. Single terms are still indexed too, with bigrams overlaid. This token filter is implemented using Apache Lucene.

ComplexDataType

Defines values for ComplexDataType. Possible values include: 'Edm.ComplexType', 'Collection(Edm.ComplexType)'

ConditionalSkill

A skill that enables scenarios that require a Boolean operation to determine the data to assign to an output.

CountDocumentsOptions

Options for performing the count operation on the index.

CreateDataSourceConnectionOptions

Options for create datasource operation.

CreateIndexOptions

Options for create index operation.

CreateIndexerOptions

Options for create indexer operation.

CreateSkillsetOptions

Options for create skillset operation.

CreateSynonymMapOptions

Options for create synonymmap operation.

CustomEntityLookupSkill

A skill that looks for text from a custom, user-defined list of words and phrases.

CustomEntityLookupSkillLanguage

Defines values for CustomEntityLookupSkillLanguage.
KnownCustomEntityLookupSkillLanguage can be used interchangeably with CustomEntityLookupSkillLanguage; this enum contains the known values that the service supports.

Known values supported by the service

da: Danish
de: German
en: English
es: Spanish
fi: Finnish
fr: French
it: Italian
ko: Korean
pt: Portuguese

DataChangeDetectionPolicy

Contains the possible cases for DataChangeDetectionPolicy.

DataDeletionDetectionPolicy

Contains the possible cases for DataDeletionDetectionPolicy.

DefaultCognitiveServicesAccount

An empty object that represents the default cognitive service resource for a skillset.

DeleteDocumentsOptions

Options for the delete documents operation.

DictionaryDecompounderTokenFilter

Decomposes compound words found in many Germanic languages. This token filter is implemented using Apache Lucene.

DistanceScoringFunction

Defines a function that boosts scores based on distance from a geographic location.

DocumentExtractionSkill

A skill that extracts content from a file within the enrichment pipeline.

EdgeNGramTokenFilterSide

Defines values for EdgeNGramTokenFilterSide.

EdgeNGramTokenizer

Tokenizes the input from an edge into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene.

ElisionTokenFilter

Removes elisions. For example, "l'avion" (the plane) will be converted to "avion" (plane). This token filter is implemented using Apache Lucene.

EntityCategory

Defines values for EntityCategory.
KnownEntityCategory can be used interchangeably with EntityCategory; this enum contains the known values that the service supports.

Known values supported by the service

location: Entities describing a physical location.
organization: Entities describing an organization.
person: Entities describing a person.
quantity: Entities describing a quantity.
datetime: Entities describing a date and time.
url: Entities describing a URL.
email: Entities describing an email address.

EntityRecognitionSkill

Text analytics entity recognition.

EntityRecognitionSkillLanguage

Defines values for EntityRecognitionSkillLanguage.
KnownEntityRecognitionSkillLanguage can be used interchangeably with EntityRecognitionSkillLanguage; this enum contains the known values that the service supports.

Known values supported by the service

ar: Arabic
cs: Czech
zh-Hans: Chinese-Simplified
zh-Hant: Chinese-Traditional
da: Danish
nl: Dutch
en: English
fi: Finnish
fr: French
de: German
el: Greek
hu: Hungarian
it: Italian
ja: Japanese
ko: Korean
no: Norwegian (Bokmaal)
pl: Polish
pt-PT: Portuguese (Portugal)
pt-BR: Portuguese (Brazil)
ru: Russian
es: Spanish
sv: Swedish
tr: Turkish

FreshnessScoringFunction

Defines a function that boosts scores based on the value of a date-time field.

GetDataSourceConnectionOptions

Options for get datasource operation.

GetIndexOptions

Options for get index operation.

GetIndexStatisticsOptions

Options for get index statistics operation.

GetIndexerOptions

Options for get indexer operation.

GetIndexerStatusOptions

Options for get indexer status operation.

GetServiceStatisticsOptions

Options for get service statistics operation.

GetSkillSetOptions

Options for get skillset operation.

GetSynonymMapsOptions

Options for get synonymmaps operation.

HighWaterMarkChangeDetectionPolicy

Defines a data change detection policy that captures changes based on the value of a high water mark column.

ImageAnalysisSkill

A skill that analyzes image files. It extracts a rich set of visual features based on the image content.

ImageAnalysisSkillLanguage

Defines values for ImageAnalysisSkillLanguage.
KnownImageAnalysisSkillLanguage can be used interchangeably with ImageAnalysisSkillLanguage; this enum contains the known values that the service supports.

Known values supported by the service

en: English
es: Spanish
ja: Japanese
pt: Portuguese
zh: Chinese

ImageDetail

Defines values for ImageDetail.
KnownImageDetail can be used interchangeably with ImageDetail; this enum contains the known values that the service supports.

Known values supported by the service

celebrities: Details recognized as celebrities.
landmarks: Details recognized as landmarks.

IndexActionType

Defines values for IndexActionType.

IndexDocumentsAction

Represents an index action that operates on a document.

IndexIterator

An iterator for listing the indexes that exist in the Search service. Will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration.

IndexNameIterator

An iterator for listing the indexes that exist in the Search service. Will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration.

IndexerExecutionEnvironment

Defines values for IndexerExecutionEnvironment.
KnownIndexerExecutionEnvironment can be used interchangeably with IndexerExecutionEnvironment; this enum contains the known values that the service supports.

Known values supported by the service

standard: Indicates that Azure Cognitive Search can determine where the indexer should execute. This is the default environment when nothing is specified and is the recommended value.
private: Indicates that the indexer should run with the environment provisioned specifically for the search service. This should only be specified as the execution environment if the indexer needs to access resources securely over shared private link resources.

IndexerExecutionStatus

Defines values for IndexerExecutionStatus.

IndexerStatus

Defines values for IndexerStatus.

KeepTokenFilter

A token filter that only keeps tokens with text contained in a specified list of words. This token filter is implemented using Apache Lucene.

KeyPhraseExtractionSkill

A skill that uses text analytics for key phrase extraction.

KeyPhraseExtractionSkillLanguage

Defines values for KeyPhraseExtractionSkillLanguage.
KnownKeyPhraseExtractionSkillLanguage can be used interchangeably with KeyPhraseExtractionSkillLanguage; this enum contains the known values that the service supports.

Known values supported by the service

da: Danish
nl: Dutch
en: English
fi: Finnish
fr: French
de: German
it: Italian
ja: Japanese
ko: Korean
no: Norwegian (Bokmaal)
pl: Polish
pt-PT: Portuguese (Portugal)
pt-BR: Portuguese (Brazil)
ru: Russian
es: Spanish
sv: Swedish

KeywordMarkerTokenFilter

Marks terms as keywords. This token filter is implemented using Apache Lucene.

LanguageDetectionSkill

A skill that detects the language of input text and reports a single language code for every document submitted on the request. The language code is paired with a score indicating the confidence of the analysis.

LengthTokenFilter

Removes words that are too long or too short. This token filter is implemented using Apache Lucene.

LexicalAnalyzer

Contains the possible cases for Analyzer.

LexicalAnalyzerName

Defines values for LexicalAnalyzerName.
KnownLexicalAnalyzerName can be used interchangeably with LexicalAnalyzerName; this enum contains the known values that the service supports.

Known values supported by the service

ar.microsoft: Microsoft analyzer for Arabic.
ar.lucene: Lucene analyzer for Arabic.
hy.lucene: Lucene analyzer for Armenian.
bn.microsoft: Microsoft analyzer for Bangla.
eu.lucene: Lucene analyzer for Basque.
bg.microsoft: Microsoft analyzer for Bulgarian.
bg.lucene: Lucene analyzer for Bulgarian.
ca.microsoft: Microsoft analyzer for Catalan.
ca.lucene: Lucene analyzer for Catalan.
zh-Hans.microsoft: Microsoft analyzer for Chinese (Simplified).
zh-Hans.lucene: Lucene analyzer for Chinese (Simplified).
zh-Hant.microsoft: Microsoft analyzer for Chinese (Traditional).
zh-Hant.lucene: Lucene analyzer for Chinese (Traditional).
hr.microsoft: Microsoft analyzer for Croatian.
cs.microsoft: Microsoft analyzer for Czech.
cs.lucene: Lucene analyzer for Czech.
da.microsoft: Microsoft analyzer for Danish.
da.lucene: Lucene analyzer for Danish.
nl.microsoft: Microsoft analyzer for Dutch.
nl.lucene: Lucene analyzer for Dutch.
en.microsoft: Microsoft analyzer for English.
en.lucene: Lucene analyzer for English.
et.microsoft: Microsoft analyzer for Estonian.
fi.microsoft: Microsoft analyzer for Finnish.
fi.lucene: Lucene analyzer for Finnish.
fr.microsoft: Microsoft analyzer for French.
fr.lucene: Lucene analyzer for French.
gl.lucene: Lucene analyzer for Galician.
de.microsoft: Microsoft analyzer for German.
de.lucene: Lucene analyzer for German.
el.microsoft: Microsoft analyzer for Greek.
el.lucene: Lucene analyzer for Greek.
gu.microsoft: Microsoft analyzer for Gujarati.
he.microsoft: Microsoft analyzer for Hebrew.
hi.microsoft: Microsoft analyzer for Hindi.
hi.lucene: Lucene analyzer for Hindi.
hu.microsoft: Microsoft analyzer for Hungarian.
hu.lucene: Lucene analyzer for Hungarian.
is.microsoft: Microsoft analyzer for Icelandic.
id.microsoft: Microsoft analyzer for Indonesian (Bahasa).
id.lucene: Lucene analyzer for Indonesian.
ga.lucene: Lucene analyzer for Irish.
it.microsoft: Microsoft analyzer for Italian.
it.lucene: Lucene analyzer for Italian.
ja.microsoft: Microsoft analyzer for Japanese.
ja.lucene: Lucene analyzer for Japanese.
kn.microsoft: Microsoft analyzer for Kannada.
ko.microsoft: Microsoft analyzer for Korean.
ko.lucene: Lucene analyzer for Korean.
lv.microsoft: Microsoft analyzer for Latvian.
lv.lucene: Lucene analyzer for Latvian.
lt.microsoft: Microsoft analyzer for Lithuanian.
ml.microsoft: Microsoft analyzer for Malayalam.
ms.microsoft: Microsoft analyzer for Malay (Latin).
mr.microsoft: Microsoft analyzer for Marathi.
nb.microsoft: Microsoft analyzer for Norwegian (Bokmål).
no.lucene: Lucene analyzer for Norwegian.
fa.lucene: Lucene analyzer for Persian.
pl.microsoft: Microsoft analyzer for Polish.
pl.lucene: Lucene analyzer for Polish.
pt-BR.microsoft: Microsoft analyzer for Portuguese (Brazil).
pt-BR.lucene: Lucene analyzer for Portuguese (Brazil).
pt-PT.microsoft: Microsoft analyzer for Portuguese (Portugal).
pt-PT.lucene: Lucene analyzer for Portuguese (Portugal).
pa.microsoft: Microsoft analyzer for Punjabi.
ro.microsoft: Microsoft analyzer for Romanian.
ro.lucene: Lucene analyzer for Romanian.
ru.microsoft: Microsoft analyzer for Russian.
ru.lucene: Lucene analyzer for Russian.
sr-cyrillic.microsoft: Microsoft analyzer for Serbian (Cyrillic).
sr-latin.microsoft: Microsoft analyzer for Serbian (Latin).
sk.microsoft: Microsoft analyzer for Slovak.
sl.microsoft: Microsoft analyzer for Slovenian.
es.microsoft: Microsoft analyzer for Spanish.
es.lucene: Lucene analyzer for Spanish.
sv.microsoft: Microsoft analyzer for Swedish.
sv.lucene: Lucene analyzer for Swedish.
ta.microsoft: Microsoft analyzer for Tamil.
te.microsoft: Microsoft analyzer for Telugu.
th.microsoft: Microsoft analyzer for Thai.
th.lucene: Lucene analyzer for Thai.
tr.microsoft: Microsoft analyzer for Turkish.
tr.lucene: Lucene analyzer for Turkish.
uk.microsoft: Microsoft analyzer for Ukrainian.
ur.microsoft: Microsoft analyzer for Urdu.
vi.microsoft: Microsoft analyzer for Vietnamese.
standard.lucene: Standard Lucene analyzer.
standardasciifolding.lucene: Standard ASCII Folding Lucene analyzer. See https://docs.microsoft.com/rest/api/searchservice/Custom-analyzers-in-Azure-Search#Analyzers
keyword: Treats the entire content of a field as a single token. This is useful for data like zip codes, ids, and some product names. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/KeywordAnalyzer.html
pattern: Flexibly separates text into terms via a regular expression pattern. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/PatternAnalyzer.html
simple: Divides text at non-letters and converts them to lower case. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/SimpleAnalyzer.html
stop: Divides text at non-letters; applies the lowercase and stopword token filters. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/StopAnalyzer.html
whitespace: An analyzer that uses the whitespace tokenizer. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/WhitespaceAnalyzer.html

LexicalTokenizer

Contains the possible cases for Tokenizer.

LimitTokenFilter

Limits the number of tokens while indexing. This token filter is implemented using Apache Lucene.

ListDataSourceConnectionsOptions

Options for a list data sources operation.

ListIndexersOptions

Options for a list indexers operation.

ListIndexesOptions

Options for a list indexes operation.

ListSkillsetsOptions

Options for a list skillsets operation.

ListSynonymMapsOptions

Options for a list synonymMaps operation.

LuceneStandardAnalyzer

Standard Apache Lucene analyzer; composed of the standard tokenizer, lowercase filter, and stop filter.

MagnitudeScoringFunction

Defines a function that boosts scores based on the magnitude of a numeric field.

MappingCharFilter

A character filter that applies mappings defined with the mappings option. Matching is greedy (longest pattern matching at a given point wins). Replacement is allowed to be the empty string. This character filter is implemented using Apache Lucene.

MergeDocumentsOptions

Options for the merge documents operation.

MergeOrUploadDocumentsOptions

Options for the merge or upload documents operation.

MergeSkill

A skill for merging two or more strings into a single unified string, with an optional user-defined delimiter separating each component part.

MicrosoftLanguageStemmingTokenizer

Divides text using language-specific rules and reduces words to their base forms.

MicrosoftLanguageTokenizer

Divides text using language-specific rules.

MicrosoftStemmingTokenizerLanguage

Defines values for MicrosoftStemmingTokenizerLanguage.

MicrosoftTokenizerLanguage

Defines values for MicrosoftTokenizerLanguage.

NGramTokenizer

Tokenizes the input into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene.

OcrSkill

A skill that extracts text from image files.

OcrSkillLanguage

Defines values for OcrSkillLanguage.
KnownOcrSkillLanguage can be used interchangeably with OcrSkillLanguage; this enum contains the known values that the service supports.

Known values supported by the service

zh-Hans: Chinese-Simplified
zh-Hant: Chinese-Traditional
cs: Czech
da: Danish
nl: Dutch
en: English
fi: Finnish
fr: French
de: German
el: Greek
hu: Hungarian
it: Italian
ja: Japanese
ko: Korean
nb: Norwegian (Bokmaal)
pl: Polish
pt: Portuguese
ru: Russian
es: Spanish
sv: Swedish
tr: Turkish
ar: Arabic
ro: Romanian
sr-Cyrl: Serbian (Cyrillic, Serbia)
sr-Latn: Serbian (Latin, Serbia)
sk: Slovak

PathHierarchyTokenizer

Tokenizer for path-like hierarchies. This tokenizer is implemented using Apache Lucene.

PatternCaptureTokenFilter

Uses Java regexes to emit multiple tokens - one for each capture group in one or more patterns. This token filter is implemented using Apache Lucene.

PatternReplaceCharFilter

A character filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This character filter is implemented using Apache Lucene.
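
Expressed as an index definition, the filter from that example might look like this sketch (the filter name is illustrative):

import { PatternReplaceCharFilter } from "@azure/search-documents";

const charFilter: PatternReplaceCharFilter = {
  odatatype: "#Microsoft.Azure.Search.PatternReplaceCharFilter",
  name: "my_pattern_replace",
  pattern: "(aa)\\s+(bb)",
  replacement: "$1#$2",
};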

PatternReplaceTokenFilter

A token filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This token filter is implemented using Apache Lucene.

PhoneticEncoder

Defines values for PhoneticEncoder.

PhoneticTokenFilter

Create tokens for phonetic matches. This token filter is implemented using Apache Lucene.

QueryType

Defines values for QueryType.

RegexFlags

Defines values for RegexFlags.
KnownRegexFlags can be used interchangeably with RegexFlags; this enum contains the known values that the service supports.

Known values supported by the service

CANON_EQ: Enables canonical equivalence.
CASE_INSENSITIVE: Enables case-insensitive matching.
COMMENTS: Permits whitespace and comments in the pattern.
DOTALL: Enables dotall mode.
LITERAL: Enables literal parsing of the pattern.
MULTILINE: Enables multiline mode.
UNICODE_CASE: Enables Unicode-aware case folding.
UNIX_LINES: Enables Unix lines mode.

ResetIndexerOptions

Options for reset indexer operation.

RunIndexerOptions

Options for run indexer operation.

ScoringFunction

Contains the possible cases for ScoringFunction.

ScoringFunctionAggregation

Defines values for ScoringFunctionAggregation.

ScoringFunctionInterpolation

Defines values for ScoringFunctionInterpolation.

ScoringStatistics

Defines values for ScoringStatistics.

SearchField

Represents a field in an index definition, which describes the name, data type, and search behavior of a field.

SearchFieldDataType

Defines values for SearchFieldDataType. Possible values include: 'Edm.String', 'Edm.Int32', 'Edm.Int64', 'Edm.Double', 'Edm.Boolean', 'Edm.DateTimeOffset', 'Edm.GeographyPoint', 'Collection(Edm.String)', 'Collection(Edm.Int32)', 'Collection(Edm.Int64)', 'Collection(Edm.Double)', 'Collection(Edm.Boolean)', 'Collection(Edm.DateTimeOffset)', 'Collection(Edm.GeographyPoint)'

SearchIndexerDataSourceType

Defines values for SearchIndexerDataSourceType.
KnownSearchIndexerDataSourceType can be used interchangeably with SearchIndexerDataSourceType; this enum contains the known values that the service supports.

Known values supported by the service

azuresql: Indicates an Azure SQL datasource.
cosmosdb: Indicates a CosmosDB datasource.
azureblob: Indicates an Azure Blob datasource.
azuretable: Indicates an Azure Table datasource.
mysql: Indicates a MySql datasource.
adlsgen2: Indicates an ADLS Gen2 datasource.

SearchIndexerKnowledgeStoreBlobProjectionSelector

Abstract class to share properties between concrete selectors.

SearchIndexerKnowledgeStoreFileProjectionSelector

Projection definition for what data to store in Azure Files.

SearchIndexerKnowledgeStoreObjectProjectionSelector

Projection definition for what data to store in Azure Blob.

SearchIndexerKnowledgeStoreTableProjectionSelector

Description for what data to store in Azure Tables.

SearchIndexerSkill

Contains the possible cases for Skill.

SearchIndexingBufferedSenderDeleteDocumentsOptions

Options for SearchIndexingBufferedSenderDeleteDocuments.

SearchIndexingBufferedSenderFlushDocumentsOptions

Options for SearchIndexingBufferedSenderFlushDocuments.

SearchIndexingBufferedSenderMergeDocumentsOptions

Options for SearchIndexingBufferedSenderMergeDocuments.

SearchIndexingBufferedSenderMergeOrUploadDocumentsOptions

Options for SearchIndexingBufferedSenderMergeOrUploadDocuments.

SearchIndexingBufferedSenderUploadDocumentsOptions

Options for SearchIndexingBufferedSenderUploadDocuments.

SearchIterator

An iterator for search results of a particular query. Will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration.

SearchMode

Defines values for SearchMode.

SearchOptions

Options for committing a full search request.

SearchResult

Contains a document found by a search query, plus associated metadata.

SentimentSkill

Text analytics positive-negative sentiment analysis, scored as a floating-point value in the range of 0 to 1.

SentimentSkillLanguage

Defines values for SentimentSkillLanguage.
KnownSentimentSkillLanguage can be used interchangeably with SentimentSkillLanguage; this enum contains the known values that the service supports.

Known values supported by the service

da: Danish
nl: Dutch
en: English
fi: Finnish
fr: French
de: German
el: Greek
it: Italian
no: Norwegian (Bokmaal)
pl: Polish
pt-PT: Portuguese (Portugal)
ru: Russian
es: Spanish
sv: Swedish
tr: Turkish

ShaperSkill

A skill for reshaping the outputs. It creates a complex type to support composite fields (also known as multipart fields).

ShingleTokenFilter

Creates combinations of tokens as a single token. This token filter is implemented using Apache Lucene.

SimilarityAlgorithm

Contains the possible cases for Similarity.

SnowballTokenFilter

A filter that stems words using a Snowball-generated stemmer. This token filter is implemented using Apache Lucene.

SnowballTokenFilterLanguage

Defines values for SnowballTokenFilterLanguage.

SoftDeleteColumnDeletionDetectionPolicy

Defines a data deletion detection policy that implements a soft-deletion strategy. It determines whether an item should be deleted based on the value of a designated 'soft delete' column.

SplitSkill

A skill to split a string into chunks of text.

SplitSkillLanguage

Defines values for SplitSkillLanguage.
KnownSplitSkillLanguage can be used interchangeably with SplitSkillLanguage; this enum contains the known values that the service supports.

Known values supported by the service

da: Danish
de: German
en: English
es: Spanish
fi: Finnish
fr: French
it: Italian
ko: Korean
pt: Portuguese

SqlIntegratedChangeTrackingPolicy

Defines a data change detection policy that captures changes using the Integrated Change Tracking feature of Azure SQL Database.

StemmerOverrideTokenFilter

Provides the ability to override other stemming filters with custom dictionary-based stemming. Any dictionary-stemmed terms will be marked as keywords so that they will not be stemmed with stemmers down the chain. Must be placed before any stemming filters. This token filter is implemented using Apache Lucene.

StemmerTokenFilter

Language specific stemming filter. This token filter is implemented using Apache Lucene.

StemmerTokenFilterLanguage

Defines values for StemmerTokenFilterLanguage.

StopAnalyzer

Divides text at non-letters; applies the lowercase and stopword token filters. This analyzer is implemented using Apache Lucene.

StopwordsList

Defines values for StopwordsList.

StopwordsTokenFilter

Removes stop words from a token stream. This token filter is implemented using Apache Lucene.

SuggestOptions

Options for retrieving suggestions based on the searchText.

SuggestResult

A result containing a document found by a suggestion query, plus associated metadata.

SynonymTokenFilter

Matches single or multi-word synonyms in a token stream. This token filter is implemented using Apache Lucene.

TagScoringFunction

Defines a function that boosts scores of documents with string values matching a given list of tags.

TextSplitMode

Defines values for TextSplitMode.
KnownTextSplitMode can be used interchangeably with TextSplitMode; this enum contains the known values that the service supports.

Known values supported by the service

pages: Split the text into individual pages.
sentences: Split the text into individual sentences.

TextTranslationSkill

A skill to translate text from one language to another.

TextTranslationSkillLanguage

Defines values for TextTranslationSkillLanguage.
KnownTextTranslationSkillLanguage can be used interchangeably with TextTranslationSkillLanguage; this enum contains the known values that the service supports.

Known values supported by the service

af: Afrikaans
ar: Arabic
bn: Bangla
bs: Bosnian (Latin)
bg: Bulgarian
yue: Cantonese (Traditional)
ca: Catalan
zh-Hans: Chinese Simplified
zh-Hant: Chinese Traditional
hr: Croatian
cs: Czech
da: Danish
nl: Dutch
en: English
et: Estonian
fj: Fijian
fil: Filipino
fi: Finnish
fr: French
de: German
el: Greek
ht: Haitian Creole
he: Hebrew
hi: Hindi
mww: Hmong Daw
hu: Hungarian
is: Icelandic
id: Indonesian
it: Italian
ja: Japanese
sw: Kiswahili
tlh: Klingon
ko: Korean
lv: Latvian
lt: Lithuanian
mg: Malagasy
ms: Malay
mt: Maltese
nb: Norwegian
fa: Persian
pl: Polish
pt: Portuguese
otq: Queretaro Otomi
ro: Romanian
ru: Russian
sm: Samoan
sr-Cyrl: Serbian (Cyrillic)
sr-Latn: Serbian (Latin)
sk: Slovak
sl: Slovenian
es: Spanish
sv: Swedish
ty: Tahitian
ta: Tamil
te: Telugu
th: Thai
to: Tongan
tr: Turkish
uk: Ukrainian
ur: Urdu
vi: Vietnamese
cy: Welsh
yua: Yucatec Maya

TokenCharacterKind

Defines values for TokenCharacterKind.

TokenFilter

Contains the possible cases for TokenFilter.

TokenFilterName

Defines values for TokenFilterName.
KnownTokenFilterName can be used interchangeably with TokenFilterName; this enum contains the known values that the service supports.

Known values supported by the service

arabic_normalization: A token filter that applies the Arabic normalizer to normalize the orthography. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ar/ArabicNormalizationFilter.html
apostrophe: Strips all characters after an apostrophe (including the apostrophe itself). See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/tr/ApostropheFilter.html
asciifolding: Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.html
cjk_bigram: Forms bigrams of CJK terms that are generated from the standard tokenizer. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/cjk/CJKBigramFilter.html
cjk_width: Normalizes CJK width differences. Folds fullwidth ASCII variants into the equivalent basic Latin, and half-width Katakana variants into the equivalent Kana. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/cjk/CJKWidthFilter.html
classic: Removes English possessives, and dots from acronyms. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/ClassicFilter.html
common_grams: Construct bigrams for frequently occurring terms while indexing. Single terms are still indexed too, with bigrams overlaid. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/commongrams/CommonGramsFilter.html
edgeNGram_v2: Generates n-grams of the given size(s) starting from the front or the back of an input token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.html
elision: Removes elisions. For example, "l'avion" (the plane) will be converted to "avion" (plane). See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/util/ElisionFilter.html
german_normalization: Normalizes German characters according to the heuristics of the German2 snowball algorithm. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/de/GermanNormalizationFilter.html
hindi_normalization: Normalizes text in Hindi to remove some differences in spelling variations. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/hi/HindiNormalizationFilter.html
indic_normalization: Normalizes the Unicode representation of text in Indian languages. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/in/IndicNormalizationFilter.html
keyword_repeat: Emits each incoming token twice, once as keyword and once as non-keyword. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/KeywordRepeatFilter.html
kstem: A high-performance kstem filter for English. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/en/KStemFilter.html
length: Removes words that are too long or too short. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/LengthFilter.html
limit: Limits the number of tokens while indexing. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/LimitTokenCountFilter.html
lowercase: Normalizes token text to lower case. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/LowerCaseFilter.html
nGram_v2: Generates n-grams of the given size(s). See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/NGramTokenFilter.html
persian_normalization: Applies normalization for Persian. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/fa/PersianNormalizationFilter.html
phonetic: Create tokens for phonetic matches. See https://lucene.apache.org/core/4_10_3/analyzers-phonetic/org/apache/lucene/analysis/phonetic/package-tree.html
porter_stem: Uses the Porter stemming algorithm to transform the token stream. See http://tartarus.org/~martin/PorterStemmer
reverse: Reverses the token string. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/reverse/ReverseStringFilter.html
scandinavian_normalization: Normalizes use of the interchangeable Scandinavian characters. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianNormalizationFilter.html
scandinavian_folding: Folds Scandinavian characters åÅäæÄÆ->a and öÖøØ->o. It also discriminates against use of double vowels aa, ae, ao, oe and oo, leaving just the first one. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianFoldingFilter.html
shingle: Creates combinations of tokens as a single token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/shingle/ShingleFilter.html
snowball: A filter that stems words using a Snowball-generated stemmer. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/snowball/SnowballFilter.html
sorani_normalization: Normalizes the Unicode representation of Sorani text. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ckb/SoraniNormalizationFilter.html
stemmer: Language specific stemming filter. See https://docs.microsoft.com/rest/api/searchservice/Custom-analyzers-in-Azure-Search#TokenFilters
stopwords: Removes stop words from a token stream. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/StopFilter.html
trim: Trims leading and trailing whitespace from tokens. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/TrimFilter.html
truncate: Truncates the terms to a specific length. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/TruncateTokenFilter.html
unique: Filters out tokens with same text as the previous token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/RemoveDuplicatesTokenFilter.html
uppercase: Normalizes token text to upper case. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/UpperCaseFilter.html
word_delimiter: Splits words into subwords and performs optional transformations on subword groups.

TruncateTokenFilter

Truncates the terms to a specific length. This token filter is implemented using Apache Lucene.

UaxUrlEmailTokenizer

Tokenizes URLs and emails as one token. This tokenizer is implemented using Apache Lucene.

UniqueTokenFilter

Filters out tokens with same text as the previous token. This token filter is implemented using Apache Lucene.

UploadDocumentsOptions

Options for the upload documents operation.

VisualFeature

Defines values for VisualFeature.
KnownVisualFeature can be used interchangeably with VisualFeature; this enum contains the known values that the service supports.

Known values supported by the service

adult: Visual features recognized as adult persons.
brands: Visual features recognized as commercial brands.
categories: Categories.
description: Description.
faces: Visual features recognized as people faces.
objects: Visual features recognized as objects.
tags: Tags.

WebApiSkill

A skill that can call a Web API endpoint, allowing you to extend a skillset by having it call your custom code.

WordDelimiterTokenFilter

Splits words into subwords and performs optional transformations on subword groups. This token filter is implemented using Apache Lucene.

Enums

KnownAnalyzerNames

Defines values for AnalyzerName. See https://docs.microsoft.com/rest/api/searchservice/Language-support

KnownBlobIndexerDataToExtract

Known values of BlobIndexerDataToExtract that the service accepts.

KnownBlobIndexerImageAction

Known values of BlobIndexerImageAction that the service accepts.

KnownBlobIndexerPDFTextRotationAlgorithm

Known values of BlobIndexerPDFTextRotationAlgorithm that the service accepts.

KnownBlobIndexerParsingMode

Known values of BlobIndexerParsingMode that the service accepts.

KnownCharFilterName

Known values of CharFilterName that the service accepts.

KnownCharFilterNames

Defines values for CharFilterName.

KnownCustomEntityLookupSkillLanguage

Known values of CustomEntityLookupSkillLanguage that the service accepts.

KnownEntityCategory

Known values of EntityCategory that the service accepts.

KnownEntityRecognitionSkillLanguage

Known values of EntityRecognitionSkillLanguage that the service accepts.

KnownImageAnalysisSkillLanguage

Known values of ImageAnalysisSkillLanguage that the service accepts.

KnownImageDetail

Known values of ImageDetail that the service accepts.

KnownKeyPhraseExtractionSkillLanguage

Known values of KeyPhraseExtractionSkillLanguage that the service accepts.

KnownLexicalAnalyzerName

Known values of LexicalAnalyzerName that the service accepts.

KnownOcrSkillLanguage

Known values of OcrSkillLanguage that the service accepts.

KnownRegexFlags

Known values of RegexFlags that the service accepts.

KnownSearchIndexerDataSourceType

Known values of SearchIndexerDataSourceType that the service accepts.

KnownSentimentSkillLanguage

Known values of SentimentSkillLanguage that the service accepts.

KnownSplitSkillLanguage

Known values of SplitSkillLanguage that the service accepts.

KnownTextSplitMode

Known values of TextSplitMode that the service accepts.

KnownTextTranslationSkillLanguage

Known values of TextTranslationSkillLanguage that the service accepts.

KnownTokenFilterName

Known values of TokenFilterName that the service accepts.

KnownTokenFilterNames

Defines values for TokenFilterName.

KnownTokenizerNames

Defines values for TokenizerName.

KnownVisualFeature

Known values of VisualFeature that the service accepts.

Functions

odata(TemplateStringsArray, unknown[])

Escapes an OData filter expression to avoid errors with quoting string literals. Example usage:

const baseRateMax = 200;
const ratingMin = 4;
const filter = odata`Rooms/any(room: room/BaseRate lt ${baseRateMax}) and Rating ge ${ratingMin}`;

For more information on supported syntax, see: https://docs.microsoft.com/en-us/azure/search/search-query-odata-filter

Function Details

odata(TemplateStringsArray, unknown[])

Escapes an OData filter expression to avoid errors with quoting string literals. Example usage:

const baseRateMax = 200;
const ratingMin = 4;
const filter = odata`Rooms/any(room: room/BaseRate lt ${baseRateMax}) and Rating ge ${ratingMin}`;

For more information on supported syntax, see: https://docs.microsoft.com/en-us/azure/search/search-query-odata-filter

function odata(strings: TemplateStringsArray, values: unknown[]): string

Parameters

strings

TemplateStringsArray

Array of strings for the expression

values

unknown[]

Array of values for the expression

Returns

string
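
For string values, the tag quotes the interpolated value and escapes embedded quotes; the output shown in the comment below is an assumption about the escaping behavior, not verified output:

const category = "Jack's place";
const filter = odata`Category eq ${category}`;
// Expected (assumed): "Category eq 'Jack''s place'"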