Microsoft.Azure.Search.Models Namespace

Contains classes and interfaces that support access to attributes of Search resources.

Classes

AccessCondition

Defines ETag-based access conditions (such as If-Match) that can be applied to a set of operations.

Analyzer

Abstract base class for analyzers.

AnalyzeRequest

Specifies some text and analysis components used to break that text into tokens.

AnalyzeResult

The result of testing an analyzer on text.

AnalyzerName

Defines the names of all text analyzers supported by Azure Search.

AnalyzerName.AsString

The names of all of the analyzers as plain strings.

AsciiFoldingTokenFilter

Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. This token filter is implemented using Apache Lucene.

BlobExtractionMode

Defines which parts of a blob will be indexed by the blob storage indexer.

CharFilter

Abstract base class for character filters.

CharFilterName

Defines the names of all character filters supported by Azure Search.

CjkBigramTokenFilter

Forms bigrams of CJK terms that are generated from StandardTokenizer. This token filter is implemented using Apache Lucene.

ClassicTokenizer

Grammar-based tokenizer that is suitable for processing most European-language documents. This tokenizer is implemented using Apache Lucene.

CommonGramTokenFilter

Constructs bigrams for frequently occurring terms while indexing. Single terms are still indexed too, with bigrams overlaid. This token filter is implemented using Apache Lucene.

CorsOptions

Defines options to control Cross-Origin Resource Sharing (CORS) for an index.

CustomAnalyzer

Allows you to take control over the process of converting text into indexable/searchable tokens. It's a user-defined configuration consisting of a single predefined tokenizer and one or more filters. The tokenizer is responsible for breaking text into tokens, and the filters for modifying tokens emitted by the tokenizer.
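
For example, a custom analyzer that pairs the standard tokenizer with the lowercase and asciifolding token filters might be defined as follows (a minimal sketch; the analyzer name is hypothetical):

```csharp
var analyzer = new CustomAnalyzer
{
    Name = "folded_lowercase",          // hypothetical analyzer name
    Tokenizer = TokenizerName.Standard, // the single predefined tokenizer breaks text into tokens
    TokenFilters = new[]
    {
        TokenFilterName.Lowercase,      // normalize token case
        TokenFilterName.AsciiFolding    // fold accented characters to their ASCII equivalents
    }
};
```

The analyzer is then added to an index definition's Analyzers collection and referenced by name from a field.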

DataChangeDetectionPolicy

Abstract base class for data change detection policies.

DataContainer

Represents information about the entity (such as Azure SQL table or DocumentDb collection) that will be indexed.

DataDeletionDetectionPolicy

Abstract base class for data deletion detection policies.

DataSource

Represents a datasource definition in Azure Search, which can be used to configure an indexer.
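
A datasource definition points an indexer at the entity to crawl; a minimal sketch for an Azure SQL table might look like the following (the name, connection string, and table name are placeholders):

```csharp
var dataSource = new DataSource
{
    Name = "hotels-sql",                  // hypothetical datasource name
    Type = DataSourceType.AzureSql,
    Credentials = new DataSourceCredentials("Server=...;Database=...;"),  // placeholder connection string
    Container = new DataContainer("Hotels")  // the table or view to index
};
```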

DataSourceCredentials

Represents credentials that can be used to connect to a datasource.

DataSourceListResult

Response from a List Datasources request. If successful, it includes the full definitions of all datasources.

DataSourceType

Defines the type of an Azure Search datasource.

DataType

Defines the data type of a field in an Azure Search index.

DictionaryDecompounderTokenFilter

Decomposes compound words found in many Germanic languages. This token filter is implemented using Apache Lucene.

DistanceScoringFunction

Defines a function that boosts scores based on distance from a geographic location.

DistanceScoringParameters

Provides parameter values to a distance scoring function.

Document

Represents a document as a property bag. This is useful for scenarios where the index schema is only known at run-time.
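
Because Document behaves as a dictionary of field names to values, a document can be assembled without a compile-time model class (the field names below are hypothetical):

```csharp
var doc = new Document
{
    ["hotelId"] = "1",
    ["hotelName"] = "Fancy Stay",
    ["rating"] = 5
};
```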

DocumentIndexResult

Response containing the status of operations for all documents in the indexing request.

DocumentSearchResult

Response containing search results from an Azure Search index.

DocumentSearchResult<T>

Response containing search results from an Azure Search index.

DocumentSearchResultBase<TResult,TDoc>

Response containing search results from an Azure Search index.

DocumentSuggestResult

Response containing suggestion query results from an Azure Search index.

DocumentSuggestResult<T>

Response containing suggestion query results from an Azure Search index.

DocumentSuggestResultBase<TResult,TDoc>

Response containing suggestion query results from an Azure Search index.

EdgeNGramTokenFilter

Generates n-grams of the given size(s) starting from the front or the back of an input token. This token filter is implemented using Apache Lucene.

EdgeNGramTokenFilterV2

Generates n-grams of the given size(s) starting from the front or the back of an input token. This token filter is implemented using Apache Lucene.

EdgeNGramTokenizer

Tokenizes the input from an edge into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene.

ElisionTokenFilter

Removes elisions. For example, "l'avion" (the plane) will be converted to "avion" (plane). This token filter is implemented using Apache Lucene.

ExtensibleEnum<T>

Abstract base class for types that act like enums, but can be extended with arbitrary string values.

FacetResult

A single bucket of a facet query result that reports the number of documents with a field falling within a particular range or having a particular value or interval.

FacetResults

Contains all the results of a facet query, organized as a collection of buckets for each faceted field.

Field

Represents a field in an index definition in Azure Search, which describes the name, data type, and search behavior of a field.
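
A field's search behavior is configured through its properties; a minimal sketch (with a hypothetical field name):

```csharp
var field = new Field("hotelName", DataType.String)
{
    IsSearchable = true,   // include in full-text search
    IsFilterable = false,  // exclude from $filter expressions
    IsSortable = true      // allow in $orderby expressions
};
```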

FieldMapping

Defines a mapping between a field in a data source and a target field in an index.

FieldMappingFunction

Represents a function that transforms a value from a data source before indexing.
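
A field mapping can optionally apply such a function before indexing; for example, base64-encoding a source value so it is usable as a document key (a sketch; field names are hypothetical, and the Base64Encode factory method is assumed from the SDK's mapping-function support):

```csharp
var mapping = new FieldMapping
{
    SourceFieldName = "Id",                              // field in the data source
    TargetFieldName = "hotelId",                         // field in the index
    MappingFunction = FieldMappingFunction.Base64Encode() // transform applied before indexing
};
```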

FreshnessScoringFunction

Defines a function that boosts scores based on the value of a date-time field.

FreshnessScoringParameters

Provides parameter values to a freshness scoring function.

HighWaterMarkChangeDetectionPolicy

Defines a data change detection policy that captures changes based on the value of a high water mark column.

HitHighlights

Contains all the hit highlights for a document, organized as a collection of text fragments for each applicable field.

Index

Represents an index definition in Azure Search, which describes the fields and search behavior of an index.

IndexAction

Represents an index action that operates on a document.

IndexAction<T>

Represents an index action that operates on a document.

IndexActionBase<T>

Abstract base class for index actions that operate on a document.

IndexBatch

Contains a batch of document upload, merge, and/or delete operations to send to the Azure Search index.

IndexBatch<T>

Contains a batch of upload, merge, and/or delete actions to send to the Azure Search index.

IndexBatchBase<TAction,TDoc>

Abstract base class for batches of upload, merge, and/or delete actions to send to the Azure Search index.
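
A batch mixing upload, merge, and delete actions can be built from individual index actions; a minimal sketch, assuming a hypothetical Hotel model class:

```csharp
var batch = IndexBatch.New(new[]
{
    IndexAction.Upload(new Hotel { HotelId = "1", HotelName = "Fancy Stay" }),
    IndexAction.Merge(new Hotel { HotelId = "2", Rating = 4 }),
    IndexAction.Delete(new Hotel { HotelId = "3" })
});
// The batch is then sent via the index client's Documents.Index(batch) operation.
```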

Indexer

Represents an Azure Search indexer.

IndexerExecutionInfo

Represents the current status and execution history of an indexer.

IndexerExecutionResult

Represents the result of an individual indexer execution.

IndexerListResult

Response from a List Indexers request. If successful, it includes the full definitions of all indexers.

IndexGetStatisticsResult

Statistics for a given index. Statistics are collected periodically and are not guaranteed to always be up-to-date.

IndexingParameters

Represents parameters for indexer execution.

IndexingParametersExtensions

Defines extension methods for the IndexingParameters class.

IndexingResult

Status of an indexing operation for a single document.

IndexingSchedule

Represents a schedule for indexer execution.

IndexListResult

Response from a List Indexes request. If successful, it includes the full definitions of all indexes.

ItemError

Represents an item- or document-level indexing error.

KeepTokenFilter

A token filter that only keeps tokens with text contained in a specified list of words. This token filter is implemented using Apache Lucene.

KeywordMarkerTokenFilter

Marks terms as keywords. This token filter is implemented using Apache Lucene.

KeywordTokenizer

Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene.

KeywordTokenizerV2

Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene.

LengthTokenFilter

Removes words that are too long or too short. This token filter is implemented using Apache Lucene.

LimitTokenFilter

Limits the number of tokens while indexing. This token filter is implemented using Apache Lucene.

MagnitudeScoringFunction

Defines a function that boosts scores based on the magnitude of a numeric field.

MagnitudeScoringParameters

Provides parameter values to a magnitude scoring function.

MappingCharFilter

A character filter that applies mappings defined with the mappings option. Matching is greedy (longest pattern matching at a given point wins). Replacement is allowed to be the empty string. This character filter is implemented using Apache Lucene.

MicrosoftLanguageStemmingTokenizer

Divides text using language-specific rules and reduces words to their base forms.

MicrosoftLanguageTokenizer

Divides text using language-specific rules.

NGramTokenFilter

Generates n-grams of the given size(s). This token filter is implemented using Apache Lucene.

NGramTokenFilterV2

Generates n-grams of the given size(s). This token filter is implemented using Apache Lucene.

NGramTokenizer

Tokenizes the input into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene.

PathHierarchyTokenizer

Tokenizer for path-like hierarchies. This tokenizer is implemented using Apache Lucene.

PathHierarchyTokenizerV2

Tokenizer for path-like hierarchies. This tokenizer is implemented using Apache Lucene.

PatternAnalyzer

Flexibly separates text into terms via a regular expression pattern. This analyzer is implemented using Apache Lucene.

PatternCaptureTokenFilter

Uses Java regexes to emit multiple tokens, one for each capture group in one or more patterns. This token filter is implemented using Apache Lucene.

PatternReplaceCharFilter

A character filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This character filter is implemented using Apache Lucene.
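
The example above would be configured as follows (a sketch; the filter name is hypothetical, and the three-argument constructor is assumed):

```csharp
var charFilter = new PatternReplaceCharFilter(
    name: "aa_bb_join",        // hypothetical character filter name
    pattern: @"(aa)\s+(bb)",   // matches "aa", whitespace, "bb" as two capture groups
    replacement: "$1#$2");     // rejoins the captured groups with '#'
```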

PatternReplaceTokenFilter

A token filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This token filter is implemented using Apache Lucene.

PatternTokenizer

Tokenizer that uses regex pattern matching to construct distinct tokens. This tokenizer is implemented using Apache Lucene.

PhoneticTokenFilter

Creates tokens for phonetic matches. This token filter is implemented using Apache Lucene.

RangeFacetResult<T>

A single bucket of a range facet query result that reports the number of documents with a field value falling within a particular range.

RegexFlags

Defines flags that can be combined to control how regular expressions are used in the pattern analyzer and pattern tokenizer.

ScoringFunction

Abstract base class for functions that can modify document scores during ranking.

ScoringParameter

Represents a parameter value to be used in scoring functions (for example, referencePointParameter).

ScoringProfile

Defines parameters for an Azure Search index that influence scoring in search queries.

SearchContinuationToken

Encapsulates state required to continue fetching search results. This is necessary when Azure Search cannot fulfill a search request with a single response.

SearchParameters

Parameters for filtering, sorting, faceting, paging, and other search query behaviors.
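
A typical query configuration combines an OData filter with sorting, projection, and paging; a minimal sketch (field names are hypothetical):

```csharp
var parameters = new SearchParameters
{
    Filter = "rating ge 4",                            // OData filter expression
    OrderBy = new[] { "rating desc" },                 // sort clauses
    Select = new[] { "hotelId", "hotelName", "rating" }, // fields to retrieve
    Top = 10                                           // page size
};
// Passed alongside the search text to the index client's Documents.Search method.
```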

SearchParametersPayload

Parameters for filtering, sorting, faceting, paging, and other search query behaviors.

SearchRequestOptions

Additional parameters, such as a client request ID used for request tracking, for a set of operations.

SearchResult

Contains a document found by a search query, plus associated metadata.

SearchResult<T>

Contains a document found by a search query, plus associated metadata.

SearchResultBase<T>

Abstract base class for a result containing a document found by a search query, plus associated metadata.

SerializePropertyNamesAsCamelCaseAttribute

Indicates that the public properties of a model class should be serialized as camel-case in order to match the field names of an Azure Search index.
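
Applied to a model class, the attribute maps Pascal-case .NET property names to the camel-case field names conventional in an index (the class and property names are hypothetical):

```csharp
[SerializePropertyNamesAsCamelCase]
public class Hotel
{
    public string HotelId { get; set; }   // serialized as "hotelId"
    public string HotelName { get; set; } // serialized as "hotelName"
}
```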

ShingleTokenFilter

Creates combinations of tokens as a single token. This token filter is implemented using Apache Lucene.

SnowballTokenFilter

A filter that stems words using a Snowball-generated stemmer. This token filter is implemented using Apache Lucene.

SoftDeleteColumnDeletionDetectionPolicy

Defines a data deletion detection policy that implements a soft-deletion strategy. It determines whether an item should be deleted based on the value of a designated 'soft delete' column.

SqlIntegratedChangeTrackingPolicy

Defines a data change detection policy that captures changes using the Integrated Change Tracking feature of Azure SQL Database.

StandardAnalyzer

Standard Apache Lucene analyzer; composed of the standard tokenizer, lowercase filter, and stop filter.

StandardTokenizer

Breaks text following the Unicode Text Segmentation rules. This tokenizer is implemented using Apache Lucene.

StandardTokenizerV2

Breaks text following the Unicode Text Segmentation rules. This tokenizer is implemented using Apache Lucene.

StemmerOverrideTokenFilter

Provides the ability to override other stemming filters with custom dictionary-based stemming. Any dictionary-stemmed terms will be marked as keywords so that they will not be stemmed with stemmers down the chain. Must be placed before any stemming filters. This token filter is implemented using Apache Lucene.

StemmerTokenFilter

Language specific stemming filter. This token filter is implemented using Apache Lucene.

StopAnalyzer

Divides text at non-letters; applies the lowercase and stopword token filters. This analyzer is implemented using Apache Lucene.

StopwordsTokenFilter

Removes stop words from a token stream. This token filter is implemented using Apache Lucene.

Suggester

Defines how the Suggest API should apply to a group of fields in the index.
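
A suggester names the index fields whose contents feed suggestions; a minimal sketch (the suggester and field names are hypothetical, and the constructor overload taking the name followed by source field names is assumed):

```csharp
var suggester = new Suggester("sg", "hotelName", "category");
// Added to the index definition's Suggesters collection and referenced
// by name in Suggest API calls.
```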

SuggestParameters

Parameters for filtering, sorting, fuzzy matching, and other suggestions query behaviors.

SuggestParametersPayload

Parameters for filtering, sorting, fuzzy matching, and other suggestions query behaviors.

SuggestResult

Contains a document found by a suggestion query, plus associated metadata.

SuggestResult<T>

Contains a document found by a suggestion query, plus associated metadata.

SuggestResultBase<T>

Abstract base class for a result containing a document found by a suggestion query, plus associated metadata.

SynonymMap

Represents a synonym map definition in Azure Search.

SynonymMapFormat

Defines the format of an Azure Search synonym map.

SynonymMapListResult

Response from a List SynonymMaps request. If successful, it includes the full definitions of all synonym maps.

SynonymTokenFilter

Matches single or multi-word synonyms in a token stream. This token filter is implemented using Apache Lucene.

TagScoringFunction

Defines a function that boosts scores of documents with string values matching a given list of tags.

TagScoringParameters

Provides parameter values to a tag scoring function.

TextWeights

Defines weights on index fields for which matches should boost scoring in search queries.

TokenFilter

Abstract base class for token filters.

TokenFilterName

Defines the names of all token filters supported by Azure Search.

TokenInfo

Information about a token returned by an analyzer.

Tokenizer

Abstract base class for tokenizers.

TokenizerName

Defines the names of all tokenizers supported by Azure Search.

TruncateTokenFilter

Truncates the terms to a specific length. This token filter is implemented using Apache Lucene.

UaxUrlEmailTokenizer

Tokenizes URLs and emails as one token. This tokenizer is implemented using Apache Lucene.

UniqueTokenFilter

Filters out tokens with the same text as the previous token. This token filter is implemented using Apache Lucene.

ValueFacetResult<T>

A single bucket of a simple or interval facet query result that reports the number of documents with a field falling within a particular interval or having a specific value.

WordDelimiterTokenFilter

Splits words into subwords and performs optional transformations on subword groups. This token filter is implemented using Apache Lucene.

Interfaces

IResourceWithETag

Model classes that implement this interface represent resources that are persisted with an ETag version on the server.

Enums

CjkBigramTokenFilterScripts

Defines values for CjkBigramTokenFilterScripts.

EdgeNGramTokenFilterSide

Defines values for EdgeNGramTokenFilterSide.

FacetType

Specifies the type of a facet query result.

IndexActionType

Defines values for IndexActionType.

IndexerExecutionStatus

Defines values for IndexerExecutionStatus.

IndexerStatus

Defines values for IndexerStatus.

MicrosoftStemmingTokenizerLanguage

Defines values for MicrosoftStemmingTokenizerLanguage.

MicrosoftTokenizerLanguage

Defines values for MicrosoftTokenizerLanguage.

PhoneticEncoder

Defines values for PhoneticEncoder.

QueryType

Defines values for QueryType.

ScoringFunctionAggregation

Defines values for ScoringFunctionAggregation.

ScoringFunctionInterpolation

Defines values for ScoringFunctionInterpolation.

SearchMode

Defines values for SearchMode.

SnowballTokenFilterLanguage

Defines values for SnowballTokenFilterLanguage.

StemmerTokenFilterLanguage

Defines values for StemmerTokenFilterLanguage.

StopwordsList

Defines values for StopwordsList.

SuggesterSearchMode

Defines values for SuggesterSearchMode.

TokenCharacterKind

Defines values for TokenCharacterKind.