Language analyzers (Azure Search Service REST API)



Searchable fields undergo analysis that most frequently involves word breaking, text normalization, and filtering out terms. By default, searchable fields in Azure Search are analyzed with the Apache Lucene Standard analyzer (standard lucene), which breaks text into elements following the Unicode Text Segmentation rules. Additionally, the standard analyzer converts all characters to their lowercase form. Both indexed documents and search terms go through analysis during indexing and query processing.
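As an illustration, the Analyze API can show which tokens an analyzer produces for a given input. The request below is a sketch; the service name, index name, admin key, and api-version are placeholders you would substitute for your own service:

```http
POST https://[service name].search.windows.net/indexes/[index name]/analyze?api-version=[api-version]
Content-Type: application/json
api-key: [admin key]

{
  "text": "The Quick Brown Fox",
  "analyzer": "standard"
}
```

With the standard analyzer, the response lists the lowercased terms "the", "quick", "brown", and "fox", reflecting the word breaking and lowercasing described above.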

Azure Search supports a variety of languages. There are text analyzers for each supported language that account for characteristics of that language. Azure Search offers two types of language analyzers:

  • 35 analyzers backed by Lucene.

  • 50 analyzers backed by proprietary Microsoft natural language processing technology used in Office and Bing.

    Some developers might prefer the more familiar, simple, open-source Lucene solution. Lucene language analyzers are faster, but the Microsoft analyzers have advanced capabilities, such as lemmatization, word decompounding (in languages like German, Danish, Dutch, Swedish, Norwegian, Estonian, Finnish, Hungarian, Slovak) and entity recognition (URLs, emails, dates, numbers). If possible, you should run comparisons of both the Microsoft and Lucene analyzers to decide which one is a better fit.

    How they compare

    The Lucene analyzer for English extends the standard analyzer. It removes possessives (trailing 's) from words, applies stemming as per Porter Stemming algorithm, and removes English stop words.

    In comparison, the Microsoft analyzer performs lemmatization instead of stemming. This means it can handle inflected and irregular word forms much better, which results in more relevant search results (watch module 7 of the Azure Search MVA presentation for more details).

    Indexing with Microsoft analyzers is on average two to three times slower than their Lucene equivalents, depending on the language. Search performance should not be significantly affected for average size queries.


    The Search Analyzer Demo provides side-by-side comparison of results produced by the standard Lucene analyzer, Lucene's English language analyzer, and Microsoft's English natural language processor. For each search input you provide, results from each analyzer are displayed in adjacent panes.


    For each field in the index definition, you can set the analyzer property to an analyzer name that specifies the language and vendor. The same analyzer is applied when indexing and searching that field. For example, you can have separate fields for English, French, and Spanish hotel descriptions that exist side-by-side in the same index.
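For example, an index definition with per-language description fields might look like the following sketch (the index and field names are illustrative, not required by the API):

```json
{
  "name": "hotels",
  "fields": [
    { "name": "hotelId", "type": "Edm.String", "key": true, "searchable": false },
    { "name": "description_en", "type": "Edm.String", "analyzer": "en.microsoft" },
    { "name": "description_fr", "type": "Edm.String", "analyzer": "fr.lucene" },
    { "name": "description_es", "type": "Edm.String", "analyzer": "es.lucene" }
  ]
}
```

Note that Microsoft and Lucene analyzers can be mixed across fields within the same index, as shown here.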

    Use the searchFields query parameter to specify which language-specific field to search against in your queries. You can review query examples that include the analyzer property in Search Documents.
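A query scoped to one language-specific field might look like the following sketch (hypothetical index and field names from an index like the one described above; service name, query key, and api-version are placeholders):

```http
GET https://[service name].search.windows.net/indexes/hotels/docs?search=h%C3%B4tel+confortable&searchFields=description_fr&api-version=[api-version]
api-key: [query key]
```

Because searchFields is limited to description_fr, the query terms are analyzed with the French analyzer assigned to that field.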
    See Create Index (Azure Search Service REST API) for details on how to specify the language analyzer on a field in the index.


Support for Microsoft's natural language processors via the REST API in Azure Search is now out of preview and in the generally available release. Additionally, both Lucene analyzers and Microsoft's natural language processors are available in the Azure Search .NET library.

Analyzer List

Below is the list of supported languages together with Lucene and Microsoft analyzer names.

Language                 Microsoft Analyzer Name    Lucene Analyzer Name
Arabic                   ar.microsoft               ar.lucene
Armenian                                            hy.lucene
Basque                                              eu.lucene
Bulgarian                bg.microsoft               bg.lucene
Catalan                  ca.microsoft               ca.lucene
Chinese Simplified       zh-Hans.microsoft          zh-Hans.lucene
Chinese Traditional      zh-Hant.microsoft          zh-Hant.lucene
Czech                    cs.microsoft               cs.lucene
Danish                   da.microsoft               da.lucene
Dutch                    nl.microsoft               nl.lucene
English                  en.microsoft               en.lucene
Finnish                  fi.microsoft               fi.lucene
French                   fr.microsoft               fr.lucene
Galician                                            gl.lucene
German                   de.microsoft               de.lucene
Greek                    el.microsoft               el.lucene
Hindi                    hi.microsoft               hi.lucene
Hungarian                hu.microsoft               hu.lucene
Indonesian (Bahasa)      id.microsoft               id.lucene
Irish                                               ga.lucene
Italian                  it.microsoft               it.lucene
Japanese                 ja.microsoft               ja.lucene
Korean                   ko.microsoft               ko.lucene
Latvian                  lv.microsoft               lv.lucene
Malay (Latin)            ms.microsoft
Norwegian                nb.microsoft               no.lucene
Persian                                             fa.lucene
Polish                   pl.microsoft               pl.lucene
Portuguese (Brazil)      pt-Br.microsoft            pt-Br.lucene
Portuguese (Portugal)    pt-Pt.microsoft            pt-Pt.lucene
Romanian                 ro.microsoft               ro.lucene
Russian                  ru.microsoft               ru.lucene
Serbian (Cyrillic)       sr-cyrillic.microsoft
Serbian (Latin)          sr-latin.microsoft
Spanish                  es.microsoft               es.lucene
Swedish                  sv.microsoft               sv.lucene
Thai                     th.microsoft               th.lucene
Turkish                  tr.microsoft               tr.lucene

Additionally, Azure Search provides language-agnostic analyzer configurations:

Standard ASCII Folding (standardasciifolding.lucene):
- Unicode text segmentation (Standard Tokenizer)
- ASCII folding filter: converts Unicode characters that don't belong to the set of the first 127 ASCII characters into their ASCII equivalents. This is useful for removing diacritics.
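For example, assigning this configuration to a field (the field name here is illustrative) lets a query for "creme brulee" match documents containing "crème brûlée":

```json
{ "name": "description", "type": "Edm.String", "analyzer": "standardasciifolding.lucene" }
```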

All analyzers with names annotated with "lucene" are powered by Apache Lucene's language analyzers. More information about the ASCII folding filter can be found in the ASCIIFoldingFilter class documentation.

See also

Create Index (Azure Search Service REST API)
AnalyzerName Class