TextAnalyticsClient Class

The Text Analytics API is a suite of text analytics web services built with best-in-class Microsoft machine learning algorithms. The API can be used to analyze unstructured text for tasks such as sentiment analysis, key phrase extraction, and language detection. No training data is needed to use this API; just bring your text data. The API uses advanced natural language processing techniques to deliver best-in-class predictions.

Further documentation can be found at https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview

Inheritance
azure.ai.textanalytics._base_client.TextAnalyticsClientBase
TextAnalyticsClient

Constructor

TextAnalyticsClient(endpoint, credential, **kwargs)

Parameters

endpoint
str
Required

Supported Cognitive Services or Text Analytics resource endpoints (protocol and hostname, for example: https://westus2.api.cognitive.microsoft.com).

credential
AzureKeyCredential or azure.core.credentials.TokenCredential
Required

Credentials needed for the client to connect to Azure. This can be an instance of AzureKeyCredential if using a Cognitive Services/Text Analytics API key, or a token credential from azure.identity.

default_country_hint
str

Sets the default country_hint to use for all operations. Defaults to "US". If you don't want to use a country hint, pass the string "none".

default_language
str

Sets the default language to use for all operations. Defaults to "en".

api_version
str or TextAnalyticsApiVersion

The API version of the service to use for requests. It defaults to the latest service version. Setting to an older version may result in reduced feature compatibility.

Examples

Creating the TextAnalyticsClient with endpoint and API key.


   import os
   from azure.core.credentials import AzureKeyCredential
   from azure.ai.textanalytics import TextAnalyticsClient
   endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
   key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

   text_analytics_client = TextAnalyticsClient(endpoint, AzureKeyCredential(key))

Creating the TextAnalyticsClient with endpoint and token credential from Azure Active Directory.


   import os
   from azure.ai.textanalytics import TextAnalyticsClient
   from azure.identity import DefaultAzureCredential

   endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
   credential = DefaultAzureCredential()

   text_analytics_client = TextAnalyticsClient(endpoint, credential=credential)
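
A minimal sketch (values are illustrative) showing the optional default_language, default_country_hint, and api_version keyword arguments described in the parameters above.


   import os
   from azure.core.credentials import AzureKeyCredential
   from azure.ai.textanalytics import TextAnalyticsClient, TextAnalyticsApiVersion

   endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
   key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

   # Illustrative defaults: assume the documents are mostly Spanish and no country
   # hint should be sent; pin an explicit service version (an older version may
   # reduce feature compatibility).
   text_analytics_client = TextAnalyticsClient(
       endpoint,
       AzureKeyCredential(key),
       default_language="es",
       default_country_hint="none",
       api_version=TextAnalyticsApiVersion.V3_0,
   )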

Methods

analyze_sentiment

Analyze sentiment for a batch of documents. Turn on opinion mining with show_opinion_mining.

Returns a sentiment prediction, as well as sentiment scores for each sentiment class (Positive, Negative, and Neutral) for the document and each sentence within it.

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

New in version v3.1-preview: the show_opinion_mining and string_index_type parameters.

begin_analyze_actions

Start a long-running operation to perform a variety of text analysis actions over a batch of documents.

begin_analyze_healthcare_entities

Analyze healthcare entities and identify relationships between these entities in a batch of documents.

Entities are associated with references that can be found in existing knowledge bases, such as UMLS, CHV, MSH, etc.

We also extract the relations found between entities, for example in "The subject took 100 mg of ibuprofen", we would extract the relationship between the "100 mg" dosage and the "ibuprofen" medication.

NOTE: This endpoint is currently in gated preview, meaning your subscription needs to be allow-listed to use it. More information is available at https://aka.ms/text-analytics-health-request-access

detect_language

Detect language for a batch of documents.

Returns the detected language and a numeric score between zero and one. Scores close to one indicate 100% certainty that the identified language is correct. See https://aka.ms/talangs for the list of enabled languages.

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

extract_key_phrases

Extract key phrases from a batch of documents.

Returns a list of strings denoting the key phrases in the input text. For example, for the input text "The food was delicious and there were wonderful staff", the API returns the main talking points: "food" and "wonderful staff".

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

recognize_entities

Recognize entities for a batch of documents.

Identifies and categorizes entities in your text as people, places, organizations, date/time, quantities, percentages, currencies, and more. For the list of supported entity types, check: https://aka.ms/taner

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

recognize_linked_entities

Recognize linked entities from a well-known knowledge base for a batch of documents.

Identifies and disambiguates the identity of each entity found in text (for example, determining whether an occurrence of the word Mars refers to the planet, or to the Roman god of war). Recognized entities are associated with URLs to a well-known knowledge base, like Wikipedia.

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

recognize_pii_entities

Recognize entities containing personal information for a batch of documents.

Returns a list of personal information entities ("SSN", "Bank Account", etc.) in the document. For the list of supported entity types, check https://aka.ms/tanerpii

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

analyze_sentiment

Analyze sentiment for a batch of documents. Turn on opinion mining with show_opinion_mining.

Returns a sentiment prediction, as well as sentiment scores for each sentiment class (Positive, Negative, and Neutral) for the document and each sentence within it.

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

New in version v3.1-preview: the show_opinion_mining and string_index_type parameters.

analyze_sentiment(documents, **kwargs)

Returns

The combined list of AnalyzeSentimentResult and DocumentError in the order the original documents were passed in.

Return type

list[AnalyzeSentimentResult or DocumentError]

Exceptions

azure.core.exceptions.HttpResponseError or TypeError or ValueError

Examples

Analyze sentiment in a batch of documents.


   import os
   from azure.core.credentials import AzureKeyCredential
   from azure.ai.textanalytics import TextAnalyticsClient

   endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
   key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

   text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))

   documents = [
       """I had the best day of my life. I decided to go sky-diving and it made me appreciate my whole life so much more.
       I developed a deep-connection with my instructor as well, and I feel as if I've made a life-long friend in her.""",
       """This was a waste of my time. All of the views on this drop are extremely boring, all I saw was grass. 0/10 would
       not recommend to any divers, even first timers.""",
       """This was pretty good! The sights were ok, and I had fun with my instructors! Can't complain too much about my experience""",
       """I only have one word for my experience: WOW!!! I can't believe I have had such a wonderful skydiving company right
       in my backyard this whole time! I will definitely be a repeat customer, and I want to take my grandmother skydiving too,
       I know she'll love it!"""
   ]


   result = text_analytics_client.analyze_sentiment(documents, show_opinion_mining=True)
   docs = [doc for doc in result if not doc.is_error]

   print("Let's visualize the sentiment of each of these documents")
   for idx, doc in enumerate(docs):
       print("Document text: {}".format(documents[idx]))
       print("Overall sentiment: {}".format(doc.sentiment))

begin_analyze_actions

Start a long-running operation to perform a variety of text analysis actions over a batch of documents.

begin_analyze_actions(documents, actions, **kwargs)

Parameters

documents
list[str] or list[TextDocumentInput] or list[dict[str, str]]
Required

The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis you must use as input a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}.

actions
list[RecognizeEntitiesAction or RecognizePiiEntitiesAction or ExtractKeyPhrasesAction or RecognizeLinkedEntitiesAction or AnalyzeSentimentAction]
Required

A heterogeneous list of actions to perform on the input documents. Each action object encapsulates the parameters used for the particular action type. The action results will be returned in the same order the actions were specified. Duplicate actions in the list are not supported.

display_name
str

An optional display name to set for the requested analysis.

language
str

The 2 letter ISO 639-1 representation of language for the entire batch. For example, use "en" for English; "es" for Spanish etc. If not set, uses "en" for English as default. Per-document language will take precedence over whole batch language. See https://aka.ms/talangs for supported languages in Text Analytics API.

show_stats
bool

If set to true, the response will contain document-level statistics.

polling_interval
int

Waiting time between two polls for LRO operations if no Retry-After header is present. Defaults to 30 seconds.

Returns

An instance of an LROPoller. Call result() on the poller object to return a pageable heterogeneous list of the action results in the order the actions were passed to this method.

Return type

Exceptions

azure.core.exceptions.HttpResponseError or TypeError or ValueError or NotImplementedError

Examples

Start a long-running operation to perform a variety of text analysis actions over a batch of documents.


   import os
   from azure.core.credentials import AzureKeyCredential
   from azure.ai.textanalytics import (
       TextAnalyticsClient,
       RecognizeEntitiesAction,
       RecognizeLinkedEntitiesAction,
       RecognizePiiEntitiesAction,
       ExtractKeyPhrasesAction,
       AnalyzeSentimentAction,
   )

   endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
   key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

   text_analytics_client = TextAnalyticsClient(
       endpoint=endpoint,
       credential=AzureKeyCredential(key),
   )

   documents = [
       "We went to Contoso Steakhouse located at midtown NYC last week for a dinner party, and we adore the spot! \
       They provide marvelous food and they have a great menu. The chief cook happens to be the owner (I think his name is John Doe) \
       and he is super nice, coming out of the kitchen and greeted us all. We enjoyed very much dining in the place! \
       The Sirloin steak I ordered was tender and juicy, and the place was impeccably clean. You can even pre-order from their \
       online menu at www.contososteakhouse.com, call 312-555-0176 or send email to order@contososteakhouse.com! \
       The only complaint I have is the food didn't come fast enough. Overall I highly recommend it!"
   ]

   poller = text_analytics_client.begin_analyze_actions(
       documents,
       display_name="Sample Text Analysis",
       actions=[
           RecognizeEntitiesAction(),
           RecognizePiiEntitiesAction(),
           ExtractKeyPhrasesAction(),
           RecognizeLinkedEntitiesAction(),
           AnalyzeSentimentAction()
       ],
   )

   result = poller.result()
   action_results = [action_result for action_result in list(result) if not action_result.is_error]

   first_action_result = action_results[0]
   print("Results of Entities Recognition action:")
   docs = [doc for doc in first_action_result.document_results if not doc.is_error]

   for idx, doc in enumerate(docs):
       print("\nDocument text: {}".format(documents[idx]))
       for entity in doc.entities:
           print("Entity: {}".format(entity.text))
           print("...Category: {}".format(entity.category))
           print("...Confidence Score: {}".format(entity.confidence_score))
           print("...Offset: {}".format(entity.offset))
           print("...Length: {}".format(entity.length))
       print("------------------------------------------")

   second_action_result = action_results[1]
   print("Results of PII Entities Recognition action:")
   docs = [doc for doc in second_action_result.document_results if not doc.is_error]

   for idx, doc in enumerate(docs):
       print("Document text: {}".format(documents[idx]))
       print("Document text with redactions: {}".format(doc.redacted_text))
       for entity in doc.entities:
           print("Entity: {}".format(entity.text))
           print("...Category: {}".format(entity.category))
           print("...Confidence Score: {}\n".format(entity.confidence_score))
           print("...Offset: {}".format(entity.offset))
           print("...Length: {}".format(entity.length))
       print("------------------------------------------")

   third_action_result = action_results[2]
   print("Results of Key Phrase Extraction action:")
   docs = [doc for doc in third_action_result.document_results if not doc.is_error]

   for idx, doc in enumerate(docs):
       print("Document text: {}\n".format(documents[idx]))
       print("Key Phrases: {}\n".format(doc.key_phrases))
       print("------------------------------------------")

   fourth_action_result = action_results[3]
   print("Results of Linked Entities Recognition action:")
   docs = [doc for doc in fourth_action_result.document_results if not doc.is_error]

   for idx, doc in enumerate(docs):
       print("Document text: {}\n".format(documents[idx]))
       for linked_entity in doc.entities:
           print("Entity name: {}".format(linked_entity.name))
           print("...Data source: {}".format(linked_entity.data_source))
           print("...Data source language: {}".format(linked_entity.language))
           print("...Data source entity ID: {}".format(linked_entity.data_source_entity_id))
           print("...Data source URL: {}".format(linked_entity.url))
           print("...Document matches:")
           for match in linked_entity.matches:
               print("......Match text: {}".format(match.text))
               print(".........Confidence Score: {}".format(match.confidence_score))
               print(".........Offset: {}".format(match.offset))
               print(".........Length: {}".format(match.length))
       print("------------------------------------------")

   fifth_action_result = action_results[4]
   print("Results of Sentiment Analysis action:")
   docs = [doc for doc in fifth_action_result.document_results if not doc.is_error]

   for doc in docs:
       print("Overall sentiment: {}".format(doc.sentiment))
       print("Scores: positive={}; neutral={}; negative={} \n".format(
           doc.confidence_scores.positive,
           doc.confidence_scores.neutral,
           doc.confidence_scores.negative,
       ))
       print("------------------------------------------")


begin_analyze_healthcare_entities

Analyze healthcare entities and identify relationships between these entities in a batch of documents.

Entities are associated with references that can be found in existing knowledge bases, such as UMLS, CHV, MSH, etc.

We also extract the relations found between entities, for example in "The subject took 100 mg of ibuprofen", we would extract the relationship between the "100 mg" dosage and the "ibuprofen" medication.

NOTE: This endpoint is currently in gated preview, meaning your subscription needs to be allow-listed to use it. More information is available at https://aka.ms/text-analytics-health-request-access

begin_analyze_healthcare_entities(documents, **kwargs)

Parameters

documents
list[str] or list[TextDocumentInput] or list[dict[str, str]]
Required

The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis you must use as input a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}.

model_version
str

This value indicates which model will be used for scoring, e.g. "latest", "2019-10-01". If a model version is not specified, the API will default to the latest, non-preview version. At the time of release this parameter is not honored on the service side, as the service only uses the latest model; the service team is aware, and once this is fixed on the service side the SDK will work automatically. See here for more info: https://aka.ms/text-analytics-model-versioning

show_stats
bool

If set to true, the response will contain document-level statistics.

string_index_type
str

Specifies the method used to interpret string offsets. UnicodeCodePoint, the Python encoding, is the default. To override the Python default, you can also pass in Utf16CodePoint or TextElement_v8. For additional information see https://aka.ms/text-analytics-offsets

polling_interval
int

Waiting time between two polls for LRO operations if no Retry-After header is present. Defaults to 5 seconds.

continuation_token
str

A continuation token to restart a poller from a saved state.

disable_service_logs
bool

If set to true, you opt out of having your text input logged on the service side for troubleshooting. By default, Text Analytics logs your input text for 48 hours, solely to allow for troubleshooting issues in providing you with the Text Analytics natural language processing functions. Setting this parameter to true disables input logging and may limit our ability to remediate issues that occur. Please see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance for additional details, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai.

Returns

An instance of an AnalyzeHealthcareEntitiesLROPoller. Call result() on this object to return a pageable of AnalyzeHealthcareEntitiesResultItem.

Return type

Exceptions

azure.core.exceptions.HttpResponseError or TypeError or ValueError or NotImplementedError

Examples

Recognize healthcare entities in a batch of documents.


   import os
   from azure.core.credentials import AzureKeyCredential
   from azure.ai.textanalytics import TextAnalyticsClient, HealthcareEntityRelationType, HealthcareEntityRelationRoleType

   endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
   key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

   text_analytics_client = TextAnalyticsClient(
       endpoint=endpoint,
       credential=AzureKeyCredential(key),
   )

   documents = [
       """
       Patient needs to take 100 mg of ibuprofen, and 3 mg of potassium. Also needs to take
       10 mg of Zocor.
       """,
       """
       Patient needs to take 50 mg of ibuprofen, and 2 mg of Coumadin.
       """
   ]

   poller = text_analytics_client.begin_analyze_healthcare_entities(documents)
   result = poller.result()

   docs = [doc for doc in result if not doc.is_error]

   print("Let's first visualize the outputted healthcare result:")
   for idx, doc in enumerate(docs):
       for entity in doc.entities:
           print("Entity: {}".format(entity.text))
           print("...Normalized Text: {}".format(entity.normalized_text))
           print("...Category: {}".format(entity.category))
           print("...Subcategory: {}".format(entity.subcategory))
           print("...Offset: {}".format(entity.offset))
           print("...Confidence score: {}".format(entity.confidence_score))
           if entity.data_sources is not None:
               print("...Data Sources:")
               for data_source in entity.data_sources:
                   print("......Entity ID: {}".format(data_source.entity_id))
                   print("......Name: {}".format(data_source.name))
           if entity.assertion is not None:
               print("...Assertion:")
               print("......Conditionality: {}".format(entity.assertion.conditionality))
               print("......Certainty: {}".format(entity.assertion.certainty))
               print("......Association: {}".format(entity.assertion.association))
       for relation in doc.entity_relations:
           print("Relation of type: {} has the following roles".format(relation.relation_type))
           for role in relation.roles:
               print("...Role '{}' with entity '{}'".format(role.name, role.entity.text))
       print("------------------------------------------")

   print("Now, let's get all of medication dosage relations from the documents")
   dosage_of_medication_relations = [
       entity_relation
       for doc in docs
       for entity_relation in doc.entity_relations if entity_relation.relation_type == HealthcareEntityRelationType.DOSAGE_OF_MEDICATION
   ]
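
A short follow-up sketch that prints the roles in each of the dosage relations gathered above, using the same attributes shown in the relation loop earlier in this example:


   for relation in dosage_of_medication_relations:
       print("Found a 'DosageOfMedication' relation with the following roles:")
       for role in relation.roles:
           print("...Role '{}' with entity '{}'".format(role.name, role.entity.text))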

detect_language

Detect language for a batch of documents.

Returns the detected language and a numeric score between zero and one. Scores close to one indicate 100% certainty that the identified language is correct. See https://aka.ms/talangs for the list of enabled languages.

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

detect_language(documents, **kwargs)

Parameters

documents
list[str] or list[DetectLanguageInput] or list[dict[str, str]]
Required

The set of documents to process as part of this batch. If you wish to specify the ID and country_hint on a per-item basis you must use as input a list[DetectLanguageInput] or a list of dict representations of DetectLanguageInput, like {"id": "1", "country_hint": "us", "text": "hello world"}.

country_hint
str

Country of origin hint for the entire batch. Accepts two letter country codes specified by ISO 3166-1 alpha-2. Per-document country hints will take precedence over whole batch hints. Defaults to "US". If you don't want to use a country hint, pass the string "none".

model_version
str

Version of the model used on the service side for scoring, e.g. "latest", "2019-10-01". If a model version is not specified, the API will default to the latest, non-preview version. See here for more info: https://aka.ms/text-analytics-model-versioning

show_stats
bool

If set to true, the response will contain document-level statistics in the statistics field of the document-level response.

disable_service_logs
bool

If set to true, you opt out of having your text input logged on the service side for troubleshooting. By default, Text Analytics logs your input text for 48 hours, solely to allow for troubleshooting issues in providing you with the Text Analytics natural language processing functions. Setting this parameter to true disables input logging and may limit our ability to remediate issues that occur. Please see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance for additional details, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai.

Returns

The combined list of DetectLanguageResult and DocumentError in the order the original documents were passed in.

Return type

list[DetectLanguageResult or DocumentError]

Exceptions

azure.core.exceptions.HttpResponseError or TypeError or ValueError

Examples

Detecting language in a batch of documents.


   import os
   from azure.core.credentials import AzureKeyCredential
   from azure.ai.textanalytics import TextAnalyticsClient

   endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
   key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

   text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
   documents = [
       """
       The concierge Paulette was extremely helpful. Sadly when we arrived the elevator was broken, but with Paulette's help we barely noticed this inconvenience.
       She arranged for our baggage to be brought up to our room with no extra charge and gave us a free meal to refurbish all of the calories we lost from
       walking up the stairs :). Can't say enough good things about my experience!
       """,
       """
       最近由于工作压力太大,我们决定去富酒店度假。那儿的温泉实在太舒服了,我跟我丈夫都完全恢复了工作前的青春精神!加油!
       """
   ]

   result = text_analytics_client.detect_language(documents)
   reviewed_docs = [doc for doc in result if not doc.is_error]

   print("Let's see what language each review is in!")

   for idx, doc in enumerate(reviewed_docs):
       print("Review #{} is in '{}', which has ISO639-1 name '{}'\n".format(
           idx, doc.primary_language.name, doc.primary_language.iso6391_name
       ))
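
A sketch of the country_hint options described above: disabling the hint for a whole batch, or supplying per-document hints via the dict form of DetectLanguageInput (reusing the client and documents from this example; IDs and hints are illustrative).


   # Whole batch: don't send any country hint
   result_without_hint = text_analytics_client.detect_language(documents, country_hint="none")

   # Per-document country hints
   documents_with_hints = [
       {"id": "1", "country_hint": "US", "text": "Can't say enough good things about my stay!"},
       {"id": "2", "country_hint": "none", "text": "最近由于工作压力太大,我们决定去富酒店度假。"},
   ]
   result_with_hints = text_analytics_client.detect_language(documents_with_hints)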

extract_key_phrases

Extract key phrases from a batch of documents.

Returns a list of strings denoting the key phrases in the input text. For example, for the input text "The food was delicious and there were wonderful staff", the API returns the main talking points: "food" and "wonderful staff".

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

extract_key_phrases(documents, **kwargs)

Parameters

documents
list[str] or list[TextDocumentInput] or list[dict[str, str]]
Required

The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis you must use as input a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}.

language
str

The 2 letter ISO 639-1 representation of language for the entire batch. For example, use "en" for English; "es" for Spanish etc. If not set, uses "en" for English as default. Per-document language will take precedence over whole batch language. See https://aka.ms/talangs for supported languages in Text Analytics API.

model_version
str

This value indicates which model will be used for scoring, e.g. "latest", "2019-10-01". If a model-version is not specified, the API will default to the latest, non-preview version. See here for more info: https://aka.ms/text-analytics-model-versioning

show_stats
bool

If set to true, the response will contain document-level statistics in the statistics field of the document-level response.

disable_service_logs
bool

If set to true, you opt out of having your text input logged on the service side for troubleshooting. By default, Text Analytics logs your input text for 48 hours, solely to allow for troubleshooting issues in providing you with the Text Analytics natural language processing functions. Setting this parameter to true disables input logging and may limit our ability to remediate issues that occur. Please see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance for additional details, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai.

Returns

The combined list of ExtractKeyPhrasesResult and DocumentError in the order the original documents were passed in.

Return type

list[ExtractKeyPhrasesResult or DocumentError]

Exceptions

azure.core.exceptions.HttpResponseError or TypeError or ValueError

Examples

Extract the key phrases in a batch of documents.


   import os
   from azure.core.credentials import AzureKeyCredential
   from azure.ai.textanalytics import TextAnalyticsClient

   endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
   key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

   text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
   articles = [
       """
       Washington, D.C. Autumn in DC is a uniquely beautiful season. The leaves fall from the trees
       in a city chockful of forrests, leaving yellow leaves on the ground and a clearer view of the
       blue sky above...
       """,
       """
       Redmond, WA. In the past few days, Microsoft has decided to further postpone the start date of
       its United States workers, due to the pandemic that rages with no end in sight...
       """,
       """
       Redmond, WA. Employees at Microsoft can be excited about the new coffee shop that will open on campus
       once workers no longer have to work remotely...
       """
   ]

   result = text_analytics_client.extract_key_phrases(articles)
   for idx, doc in enumerate(result):
       if not doc.is_error:
           print("Key phrases in article #{}: {}".format(
               idx + 1,
               ", ".join(doc.key_phrases)
           ))
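
The show_stats keyword described above adds per-document statistics to each result. A minimal sketch, reusing the client and articles from this example; the statistics attribute names (character_count, transaction_count) are taken from the TextDocumentStatistics model and are shown here as an illustration.


   result = text_analytics_client.extract_key_phrases(articles, show_stats=True)
   for doc in result:
       if not doc.is_error:
           print("Characters: {}, transactions: {}".format(
               doc.statistics.character_count,
               doc.statistics.transaction_count
           ))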

recognize_entities

Recognize entities for a batch of documents.

Identifies and categorizes entities in your text as people, places, organizations, date/time, quantities, percentages, currencies, and more. For the list of supported entity types, check: https://aka.ms/taner

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

recognize_entities(documents, **kwargs)

Parameters

documents
list[str] or list[TextDocumentInput] or list[dict[str, str]]
Required

The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis you must use as input a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}.

language
str

The 2 letter ISO 639-1 representation of language for the entire batch. For example, use "en" for English; "es" for Spanish etc. If not set, uses "en" for English as default. Per-document language will take precedence over whole batch language. See https://aka.ms/talangs for supported languages in Text Analytics API.

model_version
str

This value indicates which model will be used for scoring, e.g. "latest", "2019-10-01". If a model-version is not specified, the API will default to the latest, non-preview version. See here for more info: https://aka.ms/text-analytics-model-versioning

show_stats
bool

If set to true, the response will contain document-level statistics in the statistics field of the document-level response.

string_index_type
str

Specifies the method used to interpret string offsets. UnicodeCodePoint, the Python encoding, is the default. To override the Python default, you can also pass in Utf16CodePoint or TextElement_v8. For additional information see https://aka.ms/text-analytics-offsets

disable_service_logs
bool

If set to true, you opt out of having your text input logged on the service side for troubleshooting. By default, Text Analytics logs your input text for 48 hours, solely to allow for troubleshooting issues in providing you with the Text Analytics natural language processing functions. Setting this parameter to true disables input logging and may limit our ability to remediate issues that occur. Please see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance for additional details, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai.

Returns

The combined list of RecognizeEntitiesResult and DocumentError in the order the original documents were passed in.

Return type

list[RecognizeEntitiesResult or DocumentError]

Exceptions

azure.core.exceptions.HttpResponseError or TypeError or ValueError

Examples

Recognize entities in a batch of documents.


   import os
   from azure.core.credentials import AzureKeyCredential
   from azure.ai.textanalytics import TextAnalyticsClient

   endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
   key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

   text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
   reviews = [
       """I work for Foo Company, and we hired Contoso for our annual founding ceremony. The food
       was amazing and we all can't say enough good words about the quality and the level of service.""",
       """We at the Foo Company re-hired Contoso after all of our past successes with the company.
       Though the food was still great, I feel there has been a quality drop since their last time
       catering for us. Is anyone else running into the same problem?""",
       """Bar Company is over the moon about the service we received from Contoso, the best sliders ever!!!!"""
   ]

   result = text_analytics_client.recognize_entities(reviews)
   result = [review for review in result if not review.is_error]

   for idx, review in enumerate(result):
       for entity in review.entities:
           print("Entity '{}' has category '{}'".format(entity.text, entity.category))

recognize_linked_entities

Recognize linked entities from a well-known knowledge base for a batch of documents.

Identifies and disambiguates the identity of each entity found in text (for example, determining whether an occurrence of the word Mars refers to the planet, or to the Roman god of war). Recognized entities are associated with URLs to a well-known knowledge base, like Wikipedia.

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

recognize_linked_entities(documents, **kwargs)

Parameters

documents
list[str] or list[TextDocumentInput] or list[dict[str, str]]
Required

The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis you must use as input a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}.

language
str

The 2 letter ISO 639-1 representation of language for the entire batch. For example, use "en" for English; "es" for Spanish etc. If not set, uses "en" for English as default. Per-document language will take precedence over whole batch language. See https://aka.ms/talangs for supported languages in Text Analytics API.

model_version
str

This value indicates which model will be used for scoring, e.g. "latest", "2019-10-01". If a model-version is not specified, the API will default to the latest, non-preview version. See here for more info: https://aka.ms/text-analytics-model-versioning

show_stats
bool

If set to true, the response will contain document-level statistics in the statistics field of the document-level response.

string_index_type
str

Specifies the method used to interpret string offsets. UnicodeCodePoint, the Python encoding, is the default. To override the Python default, you can also pass in Utf16CodePoint or TextElement_v8. For additional information see https://aka.ms/text-analytics-offsets

disable_service_logs
bool

If set to true, you opt out of having your text input logged on the service side for troubleshooting. By default, Text Analytics logs your input text for 48 hours, solely to allow for troubleshooting issues in providing you with the Text Analytics natural language processing functions. Setting this parameter to true disables input logging and may limit our ability to remediate issues that occur. Please see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance for additional details, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai.

Returns

The combined list of RecognizeLinkedEntitiesResult and DocumentError in the order the original documents were passed in.

Return type

list[RecognizeLinkedEntitiesResult or DocumentError]

Exceptions

azure.core.exceptions.HttpResponseError or TypeError or ValueError

Examples

Recognize linked entities in a batch of documents.


   import os
   from azure.core.credentials import AzureKeyCredential
   from azure.ai.textanalytics import TextAnalyticsClient

   endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
   key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

   text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
   documents = [
       """
       Microsoft was founded by Bill Gates with some friends he met at Harvard. One of his friends,
       Steve Ballmer, eventually became CEO after Bill Gates as well. Steve Ballmer eventually stepped
       down as CEO of Microsoft, and was succeeded by Satya Nadella.
        Microsoft originally moved its headquarters to Bellevue, Washington in January 1979, but is now
       headquartered in Redmond.
       """
   ]

   result = text_analytics_client.recognize_linked_entities(documents)
   docs = [doc for doc in result if not doc.is_error]

   print(
       "Let's map each entity to it's Wikipedia article. I also want to see how many times each "
       "entity is mentioned in a document\n\n"
   )
   entity_to_url = {}
   for doc in docs:
       for entity in doc.entities:
           print("Entity '{}' has been mentioned '{}' time(s)".format(
               entity.name, len(entity.matches)
           ))
           if entity.data_source == "Wikipedia":
               entity_to_url[entity.name] = entity.url
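
A short follow-up sketch that prints the entity-to-URL mapping built above:


   for entity_name, url in entity_to_url.items():
       print("URL for entity '{}': {}".format(entity_name, url))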

recognize_pii_entities

Recognize entities containing personal information for a batch of documents.

Returns a list of personal information entities ("SSN", "Bank Account", etc.) in the document. For the list of supported entity types, check https://aka.ms/tanerpii

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

recognize_pii_entities(documents, **kwargs)

Parameters

documents
list[str] or list[TextDocumentInput] or list[dict[str, str]]
Required

The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis you must use as input a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}.

language
str

The 2 letter ISO 639-1 representation of language for the entire batch. For example, use "en" for English; "es" for Spanish etc. If not set, uses "en" for English as default. Per-document language will take precedence over whole batch language. See https://aka.ms/talangs for supported languages in Text Analytics API.

model_version
str

This value indicates which model will be used for scoring, e.g. "latest", "2019-10-01". If a model-version is not specified, the API will default to the latest, non-preview version. See here for more info: https://aka.ms/text-analytics-model-versioning

show_stats
bool

If set to true, the response will contain document-level statistics in the statistics field of the document-level response.

domain_filter
str or PiiEntityDomainType

Filters the response entities to only those included in the specified domain. For example, if set to 'phi', only entities in the Protected Healthcare Information domain will be returned. See https://aka.ms/tanerpii for more information.

categories_filter
list[PiiEntityCategoryType]

Instead of filtering over all PII entity categories, you can pass in a list of the specific PII entity categories you want to filter out. For example, if you only want to filter out U.S. social security numbers in a document, you can pass in [PiiEntityCategoryType.US_SOCIAL_SECURITY_NUMBER] for this kwarg.

string_index_type
str

Specifies the method used to interpret string offsets. UnicodeCodePoint, the Python encoding, is the default. To override the Python default, you can also pass in Utf16CodePoint or TextElement_v8. For additional information see https://aka.ms/text-analytics-offsets

disable_service_logs
bool

If set to true, you opt out of having your text input logged on the service side for troubleshooting. By default, Text Analytics logs your input text for 48 hours, solely to allow for troubleshooting issues in providing you with the Text Analytics natural language processing functions. Setting this parameter to true disables input logging and may limit our ability to remediate issues that occur. Please see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance for additional details, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai.

Returns

The combined list of RecognizePiiEntitiesResult and DocumentError in the order the original documents were passed in.

Return type

list[RecognizePiiEntitiesResult or DocumentError]

Exceptions

azure.core.exceptions.HttpResponseError or TypeError or ValueError

Examples

Recognize personally identifiable information entities in a batch of documents.


   import os
   from azure.core.credentials import AzureKeyCredential
   from azure.ai.textanalytics import TextAnalyticsClient

   endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
   key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

   text_analytics_client = TextAnalyticsClient(
       endpoint=endpoint, credential=AzureKeyCredential(key)
   )
   documents = [
       """Parker Doe has repaid all of their loans as of 2020-04-25.
       Their SSN is 859-98-0987. To contact them, use their phone number
       555-555-5555. They are originally from Brazil and have Brazilian CPF number 998.214.865-68"""
   ]

   result = text_analytics_client.recognize_pii_entities(documents)
   docs = [doc for doc in result if not doc.is_error]

   print(
       "Let's compare the original document with the documents after redaction. "
       "I also want to comb through all of the entities that got redacted"
   )
   for idx, doc in enumerate(docs):
       print("Document text: {}".format(documents[idx]))
       print("Redacted document text: {}".format(doc.redacted_text))
       for entity in doc.entities:
           print("...Entity '{}' with category '{}' got redacted".format(
               entity.text, entity.category
           ))
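
A sketch of the domain_filter and categories_filter keywords described above, reusing the client and documents from this example. The "phi" domain value and the PiiEntityCategoryType.US_SOCIAL_SECURITY_NUMBER category come from the parameter descriptions above; treat the exact filtering behavior as described there.


   from azure.ai.textanalytics import PiiEntityCategoryType

   # Only consider U.S. social security numbers
   ssn_result = text_analytics_client.recognize_pii_entities(
       documents, categories_filter=[PiiEntityCategoryType.US_SOCIAL_SECURITY_NUMBER]
   )
   for doc in ssn_result:
       if not doc.is_error:
           for entity in doc.entities:
               print("Entity '{}' has category '{}'".format(entity.text, entity.category))

   # Restrict results to the Protected Healthcare Information (PHI) domain
   phi_result = text_analytics_client.recognize_pii_entities(documents, domain_filter="phi")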