Microsoft.Azure.Search.Models Namespace

Classes

AccessCondition

Additional parameters for a set of operations.

Analyzer

Abstract base class for analyzers.

AnalyzeRequest

Specifies some text and analysis components used to break that text into tokens.

AnalyzeResult

The result of testing an analyzer on text.

AnalyzerName.AsString

The names of all of the analyzers as plain strings.

AsciiFoldingTokenFilter

Converts alphabetic, numeric, and symbolic Unicode characters that are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. This token filter is implemented using Apache Lucene.
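
The folding behavior can be approximated with Unicode NFKD decomposition. This is an illustrative sketch, not the Lucene mapping table, which handles many more characters:

```python
import unicodedata

def ascii_fold(text: str) -> str:
    # Decompose characters (e.g. "é" -> "e" + combining accent),
    # then drop anything outside the Basic Latin block.
    decomposed = unicodedata.normalize("NFKD", text)
    return "".join(ch for ch in decomposed if ord(ch) < 128)

print(ascii_fold("café Zürich"))  # -> "cafe Zurich"
```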

AutocompleteItem

The result of Autocomplete requests.

AutocompleteParameters

Additional parameters for the AutocompleteGet operation.

AutocompleteResult

The result of an Autocomplete query.

CharFilter

Abstract base class for character filters.

CjkBigramTokenFilter

Forms bigrams of CJK terms that are generated from StandardTokenizer. This token filter is implemented using Apache Lucene.

ClassicTokenizer

Grammar-based tokenizer that is suitable for processing most European-language documents. This tokenizer is implemented using Apache Lucene.

CognitiveServices

Abstract base class for describing any cognitive service resource attached to the skillset.

CognitiveServicesByKey

A cognitive service resource provisioned with a key that is attached to a skillset.

CommonGramTokenFilter

Constructs bigrams for frequently occurring terms while indexing. Single terms are still indexed too, with bigrams overlaid. This token filter is implemented using Apache Lucene.

ConditionalSkill

A skill that enables scenarios that require a Boolean operation to determine the data to assign to an output.

CorsOptions

Defines options to control Cross-Origin Resource Sharing (CORS) for an index.

CustomAnalyzer

Allows you to take control over the process of converting text into indexable/searchable tokens. It's a user-defined configuration consisting of a single predefined tokenizer and one or more filters. The tokenizer is responsible for breaking text into tokens, and the filters for modifying tokens emitted by the tokenizer.
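
The pipeline described here (character filters, then a tokenizer, then token filters) can be sketched generically. The stand-in tokenizer and filter below are illustrative, not the SDK types:

```python
def analyze(text, char_filters, tokenizer, token_filters):
    # Character filters transform the raw text before tokenization.
    for cf in char_filters:
        text = cf(text)
    # The tokenizer breaks the filtered text into tokens.
    tokens = tokenizer(text)
    # Token filters then modify the emitted token stream.
    for tf in token_filters:
        tokens = tf(tokens)
    return tokens

tokens = analyze(
    "The QUICK fox",
    char_filters=[],
    tokenizer=str.split,                                  # whitespace tokenizer stand-in
    token_filters=[lambda ts: [t.lower() for t in ts]],   # lowercase filter stand-in
)
print(tokens)  # -> ['the', 'quick', 'fox']
```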

DataChangeDetectionPolicy

Abstract base class for data change detection policies.

DataContainer

Represents information about the entity (such as Azure SQL table or DocumentDb collection) that will be indexed.

DataDeletionDetectionPolicy

Abstract base class for data deletion detection policies.

DataSource

Represents a datasource definition, which can be used to configure an indexer.

DataSourceCredentials

Represents credentials that can be used to connect to a datasource.

DataSourceListResult

Response from a List Datasources request. If successful, it includes the full definitions of all datasources.

DataType.AsString

The names of all of the data types as plain strings.

DataTypeExtensions

Defines extension methods for DataType.

DefaultCognitiveServices

An empty object that represents the default cognitive service resource for a skillset.

DictionaryDecompounderTokenFilter

Decomposes compound words found in many Germanic languages. This token filter is implemented using Apache Lucene.

DistanceScoringFunction

Defines a function that boosts scores based on distance from a geographic location.

DistanceScoringParameters

Provides parameter values to a distance scoring function.

Document

Represents a document as a property bag. This is useful for scenarios where the index schema is only known at run-time.

DocumentIndexResult

Response containing the status of operations for all documents in the indexing request.

DocumentSearchResult<T>

Response containing search results from an index.

DocumentSuggestResult<T>

Response containing suggestion query results from an index.

EdgeNGramTokenFilter

Generates n-grams of the given size(s) starting from the front or the back of an input token. This token filter is implemented using Apache Lucene.

EdgeNGramTokenFilterV2

Generates n-grams of the given size(s) starting from the front or the back of an input token. This token filter is implemented using Apache Lucene.
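
The front-edge case can be sketched as follows; the min_gram/max_gram names mirror the filter's parameters, but the sketch is illustrative rather than the Lucene implementation:

```python
def edge_ngrams(token: str, min_gram: int = 1, max_gram: int = 3) -> list[str]:
    # Emit prefixes of the token from min_gram up to max_gram characters.
    return [token[:n] for n in range(min_gram, min(max_gram, len(token)) + 1)]

print(edge_ngrams("search"))  # -> ['s', 'se', 'sea']
```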

EdgeNGramTokenizer

Tokenizes the input from an edge into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene.

ElisionTokenFilter

Removes elisions. For example, "l'avion" (the plane) will be converted to "avion" (plane). This token filter is implemented using Apache Lucene.
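
The "l'avion" example can be reproduced with a small prefix-stripping sketch. The elision list here is illustrative; the Lucene filter takes its articles as configuration:

```python
# Illustrative only: a small set of French elided articles.
ELISIONS = ("l'", "d'", "j'", "qu'")

def remove_elision(token: str) -> str:
    for prefix in ELISIONS:
        if token.lower().startswith(prefix):
            return token[len(prefix):]
    return token

print(remove_elision("l'avion"))  # -> "avion"
```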

EntityRecognitionSkill

Text analytics entity recognition.

FacetResult

A single bucket of a facet query result. Reports the number of documents with a field value falling within a particular range or having a particular value or interval.

Field

Represents a field in an index definition, which describes the name, data type, and search behavior of a field.

FieldMapping

Defines a mapping between a field in a data source and a target field in an index.

FieldMappingFunction

Represents a function that transforms a value from a data source before indexing.

FreshnessScoringFunction

Defines a function that boosts scores based on the value of a date-time field.

FreshnessScoringParameters

Provides parameter values to a freshness scoring function.

HighWaterMarkChangeDetectionPolicy

Defines a data change detection policy that captures changes based on the value of a high water mark column.

ImageAnalysisSkill

A skill that analyzes image files. It extracts a rich set of visual features based on the image content.

Index

Represents a search index definition, which describes the fields and search behavior of an index.

IndexAction

Provides factory methods for creating an index action that operates on a document.

IndexAction<T>

Represents an index action that operates on a document.

IndexBatch

Provides factory methods for creating a batch of document write operations to send to the search index.

IndexBatch<T>

Contains a batch of document write actions to send to the index.

Indexer

Represents an indexer.

IndexerExecutionInfo

Represents the current status and execution history of an indexer.

IndexerExecutionResult

Represents the result of an individual indexer execution.

IndexerLimits

Represents the limits that apply to indexer execution.

IndexerListResult

Response from a List Indexers request. If successful, it includes the full definitions of all indexers.

IndexGetStatisticsResult

Statistics for a given index. Statistics are collected periodically and are not guaranteed to always be up-to-date.

IndexingParameters

Represents parameters for indexer execution.

IndexingParametersExtensions

Defines extension methods for the IndexingParameters class.

IndexingResult

Status of an indexing operation for a single document.

IndexingSchedule

Represents a schedule for indexer execution.

IndexListResult

Response from a List Indexes request. If successful, it includes the full definitions of all indexes.

InputFieldMappingEntry

Input field mapping for a skill.

ItemError

Represents an item- or document-level indexing error.

ItemWarning

Represents an item-level warning.

KeepTokenFilter

A token filter that only keeps tokens with text contained in a specified list of words. This token filter is implemented using Apache Lucene.

KeyPhraseExtractionSkill

A skill that uses text analytics for key phrase extraction.

KeywordMarkerTokenFilter

Marks terms as keywords. This token filter is implemented using Apache Lucene.

KeywordTokenizer

Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene.

KeywordTokenizerV2

Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene.

LanguageDetectionSkill

A skill that detects the language of input text and reports a single language code for every document submitted on the request. The language code is paired with a score indicating the confidence of the analysis.

LengthTokenFilter

Removes words that are too long or too short. This token filter is implemented using Apache Lucene.

LimitTokenFilter

Limits the number of tokens while indexing. This token filter is implemented using Apache Lucene.

MagnitudeScoringFunction

Defines a function that boosts scores based on the magnitude of a numeric field.

MagnitudeScoringParameters

Provides parameter values to a magnitude scoring function.

MappingCharFilter

A character filter that applies mappings defined with the mappings option. Matching is greedy (longest pattern matching at a given point wins). Replacement is allowed to be the empty string. This character filter is implemented using Apache Lucene.
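
The greedy, longest-pattern-wins matching can be sketched as below. This is an illustration of the matching rule, not the Lucene implementation:

```python
def map_chars(text: str, mappings: dict[str, str]) -> str:
    # Greedy: at each position, try the longest mapping key first.
    keys = sorted(mappings, key=len, reverse=True)
    out, i = [], 0
    while i < len(text):
        for k in keys:
            if text.startswith(k, i):
                out.append(mappings[k])
                i += len(k)
                break
        else:
            out.append(text[i])  # no mapping matched; keep the character
            i += 1
    return "".join(out)

# "ab" wins over "a" because the longer pattern is tried first.
print(map_chars("abc", {"ab": "X", "a": "Y"}))  # -> "Xc"
```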

MergeSkill

A skill for merging two or more strings into a single unified string, with an optional user-defined delimiter separating each component part.

MicrosoftLanguageStemmingTokenizer

Divides text using language-specific rules and reduces words to their base forms.

MicrosoftLanguageTokenizer

Divides text using language-specific rules.

NamedEntityRecognitionSkill

Text analytics named entity recognition. This skill is deprecated in favor of EntityRecognitionSkill.

NGramTokenFilter

Generates n-grams of the given size(s). This token filter is implemented using Apache Lucene.

NGramTokenFilterV2

Generates n-grams of the given size(s). This token filter is implemented using Apache Lucene.

NGramTokenizer

Tokenizes the input into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene.
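
Unlike the edge variant, this tokenizer emits n-grams from every position in the input. An illustrative sketch:

```python
def ngrams(text: str, min_gram: int = 2, max_gram: int = 3) -> list[str]:
    # All substrings whose length is between min_gram and max_gram.
    return [text[i:i + n]
            for n in range(min_gram, max_gram + 1)
            for i in range(len(text) - n + 1)]

print(ngrams("cat"))  # -> ['ca', 'at', 'cat']
```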

OcrSkill

A skill that extracts text from image files.

OutputFieldMappingEntry

Output field mapping for a skill.

PathHierarchyTokenizer

Tokenizer for path-like hierarchies. This tokenizer is implemented using Apache Lucene.

PathHierarchyTokenizerV2

Tokenizer for path-like hierarchies. This tokenizer is implemented using Apache Lucene.

PatternAnalyzer

Flexibly separates text into terms via a regular expression pattern. This analyzer is implemented using Apache Lucene.

PatternCaptureTokenFilter

Uses Java regexes to emit multiple tokens - one for each capture group in one or more patterns. This token filter is implemented using Apache Lucene.

PatternReplaceCharFilter

A character filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This character filter is implemented using Apache Lucene.
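
The example in the description can be checked directly with a regular expression substitution. Python's `re` writes the capture-group references as `\1`/`\2` rather than `$1`/`$2`:

```python
import re

# Reproduces the example from the description above.
result = re.sub(r"(aa)\s+(bb)", r"\1#\2", "aa bb aa bb")
print(result)  # -> "aa#bb aa#bb"
```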

PatternReplaceTokenFilter

A token filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This token filter is implemented using Apache Lucene.

PatternTokenizer

Tokenizer that uses regex pattern matching to construct distinct tokens. This tokenizer is implemented using Apache Lucene.

PhoneticTokenFilter

Creates tokens for phonetic matches. This token filter is implemented using Apache Lucene.

RangeFacetResult<T>

A single bucket of a range facet query result that reports the number of documents with a field value falling within a particular range.

ResourceCounter

Represents a resource's usage and quota.

ScoringFunction

Abstract base class for functions that can modify document scores during ranking.

ScoringParameter

Represents a parameter value to be used in scoring functions (for example, referencePointParameter).

ScoringProfile

Defines parameters for a search index that influence scoring in search queries.

SearchContinuationToken

Encapsulates state required to continue fetching search results. This is necessary when Azure Cognitive Search cannot fulfill a search request with a single response.

SearchParameters

Parameters for filtering, sorting, faceting, paging, and other search query behaviors.

SearchRequestOptions

Additional parameters for a set of operations.

SearchResult<T>

Contains a document found by a search query, plus associated metadata.

SentimentSkill

Text analytics positive-negative sentiment analysis, scored as a floating-point value in the range 0 to 1.

SerializePropertyNamesAsCamelCaseAttribute

Indicates that the public properties of a model type should be serialized as camel-case in order to match the field names of a search index.

ServiceCounters

Represents service-level resource counters and quotas.

ServiceLimits

Represents various service level limits.

ServiceStatistics

Response from a get service statistics request. If successful, it includes service level counters and limits.

ShaperSkill

A skill for reshaping the outputs. It creates a complex type to support composite fields (also known as multipart fields).

ShingleTokenFilter

Creates combinations of tokens as a single token. This token filter is implemented using Apache Lucene.
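
Shingling joins runs of adjacent tokens into combined tokens. An illustrative sketch for a fixed shingle size:

```python
def shingles(tokens: list[str], size: int = 2, sep: str = " ") -> list[str]:
    # Join each run of `size` adjacent tokens into one combined token.
    return [sep.join(tokens[i:i + size]) for i in range(len(tokens) - size + 1)]

print(shingles(["quick", "brown", "fox"]))  # -> ['quick brown', 'brown fox']
```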

Skill

Abstract base class for skills.

Skillset

A list of skills.

SkillsetListResult

Response from a list Skillset request. If successful, it includes the full definitions of all skillsets.

SnowballTokenFilter

A filter that stems words using a Snowball-generated stemmer. This token filter is implemented using Apache Lucene.

SoftDeleteColumnDeletionDetectionPolicy

Defines a data deletion detection policy that implements a soft-deletion strategy. It determines whether an item should be deleted based on the value of a designated 'soft delete' column.

SplitSkill

A skill to split a string into chunks of text.

SqlIntegratedChangeTrackingPolicy

Defines a data change detection policy that captures changes using the Integrated Change Tracking feature of Azure SQL Database.

StandardAnalyzer

Standard Apache Lucene analyzer; composed of the standard tokenizer, lowercase filter, and stop filter.

StandardTokenizer

Breaks text following the Unicode Text Segmentation rules. This tokenizer is implemented using Apache Lucene.

StandardTokenizerV2

Breaks text following the Unicode Text Segmentation rules. This tokenizer is implemented using Apache Lucene.

StemmerOverrideTokenFilter

Provides the ability to override other stemming filters with custom dictionary-based stemming. Any dictionary-stemmed terms will be marked as keywords so that they will not be stemmed with stemmers down the chain. Must be placed before any stemming filters. This token filter is implemented using Apache Lucene.

StemmerTokenFilter

Language specific stemming filter. This token filter is implemented using Apache Lucene.

StopAnalyzer

Divides text at non-letters; applies the lowercase and stopword token filters. This analyzer is implemented using Apache Lucene.

StopwordsTokenFilter

Removes stop words from a token stream. This token filter is implemented using Apache Lucene.

Suggester

Defines how the Suggest API should apply to a group of fields in the index.

SuggestParameters

Parameters for filtering, sorting, fuzzy matching, and other suggestion query behaviors.

SuggestResult<T>

A result containing a document found by a suggestion query, plus associated metadata.

SynonymMap

Represents a synonym map definition.

SynonymMapListResult

Response from a List SynonymMaps request. If successful, it includes the full definitions of all synonym maps.

SynonymTokenFilter

Matches single or multi-word synonyms in a token stream. This token filter is implemented using Apache Lucene.

TagScoringFunction

Defines a function that boosts scores of documents with string values matching a given list of tags.

TagScoringParameters

Provides parameter values to a tag scoring function.

TextTranslationSkill

A skill to translate text from one language to another.

TextWeights

Defines weights on index fields for which matches should boost scoring in search queries.

TokenFilter

Abstract base class for token filters.

TokenInfo

Information about a token returned by an analyzer.

Tokenizer

Abstract base class for tokenizers.

TruncateTokenFilter

Truncates the terms to a specific length. This token filter is implemented using Apache Lucene.

UaxUrlEmailTokenizer

Tokenizes URLs and email addresses as one token. This tokenizer is implemented using Apache Lucene.

UniqueTokenFilter

Filters out tokens with the same text as the previous token. This token filter is implemented using Apache Lucene.
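
The behavior amounts to collapsing consecutive duplicate tokens, which can be sketched as:

```python
def unique_consecutive(tokens: list[str]) -> list[str]:
    # Drop a token when it repeats the immediately preceding token.
    out: list[str] = []
    for t in tokens:
        if not out or t != out[-1]:
            out.append(t)
    return out

print(unique_consecutive(["to", "be", "be", "or", "not"]))  # -> ['to', 'be', 'or', 'not']
```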

ValueFacetResult<T>

A single bucket of a simple or interval facet query result that reports the number of documents with a field falling within a particular interval or having a specific value.

WebApiSkill

A skill that can call a Web API endpoint, allowing you to extend a skillset by having it call your custom code.

WordDelimiterTokenFilter

Splits words into subwords and performs optional transformations on subword groups. This token filter is implemented using Apache Lucene.

Structs

AnalyzerName

Defines the names of all text analyzers supported by Azure Cognitive Search.

BlobExtractionMode

Defines which parts of a blob will be indexed by the blob storage indexer.

CharFilterName

Defines the names of all character filters supported by Azure Cognitive Search.

DataSourceType

Defines the type of a datasource.

DataType

Defines the data type of a field in a search index.

NamedEntityRecognitionSkillLanguage

Defines the language codes supported by the NamedEntityRecognitionSkill.

RegexFlags

Defines flags that can be combined to control how regular expressions are used in the pattern analyzer and pattern tokenizer.

TokenFilterName

Defines the names of all token filters supported by Azure Cognitive Search.

TokenizerName

Defines the names of all tokenizers supported by Azure Cognitive Search.

Interfaces

IResourceWithETag

Model classes that implement this interface represent resources that are persisted with an ETag version on the server.

Enums

AutocompleteMode

Defines values for AutocompleteMode.

CjkBigramTokenFilterScripts

Defines values for CjkBigramTokenFilterScripts.

EdgeNGramTokenFilterSide

Defines values for EdgeNGramTokenFilterSide.

EntityCategory

Defines values for EntityCategory.

EntityRecognitionSkillLanguage

Defines values for EntityRecognitionSkillLanguage.

FacetType

Specifies the type of a facet query result.

ImageAnalysisSkillLanguage

Defines values for ImageAnalysisSkillLanguage.

ImageDetail

Defines values for ImageDetail.

IndexActionType

Defines values for IndexActionType.

IndexerExecutionStatus

Defines values for IndexerExecutionStatus.

IndexerStatus

Defines values for IndexerStatus.

KeyPhraseExtractionSkillLanguage

Defines values for KeyPhraseExtractionSkillLanguage.

MicrosoftStemmingTokenizerLanguage

Defines values for MicrosoftStemmingTokenizerLanguage.

MicrosoftTokenizerLanguage

Defines values for MicrosoftTokenizerLanguage.

NamedEntityCategory

Defines values for NamedEntityCategory. This enum is deprecated; use EntityCategory instead.

OcrSkillLanguage

Defines values for OcrSkillLanguage.

PhoneticEncoder

Defines values for PhoneticEncoder.

QueryType

Defines values for QueryType.

ScoringFunctionAggregation

Defines values for ScoringFunctionAggregation.

ScoringFunctionInterpolation

Defines values for ScoringFunctionInterpolation.

SearchMode

Defines values for SearchMode.

SentimentSkillLanguage

Defines values for SentimentSkillLanguage.

SnowballTokenFilterLanguage

Defines values for SnowballTokenFilterLanguage.

SplitSkillLanguage

Defines values for SplitSkillLanguage.

StemmerTokenFilterLanguage

Defines values for StemmerTokenFilterLanguage.

StopwordsList

Defines values for StopwordsList.

TextExtractionAlgorithm

Defines values for TextExtractionAlgorithm.

TextSplitMode

Defines values for TextSplitMode.

TextTranslationSkillLanguage

Defines values for TextTranslationSkillLanguage.

TokenCharacterKind

Defines values for TokenCharacterKind.

VisualFeature

Defines values for VisualFeature.