Cannot access Azure Text Analytics API. "Time out" is raised

foxwentingdos 1 Reputation point
2022-03-08T12:15:56.507+00:00

I was using the sentiment analysis function of the Azure Text Analytics client library for Python (version 5.1.0). The sample program ran successfully, so I embedded it in my own code and tried it. It ran a few times and then got stuck: sometimes it raised 'Time out', but in most cases it simply never returned. I have tried the East Asia region and some other servers, and it still doesn't work.

Azure AI Language
An Azure service that provides natural language capabilities including sentiment analysis, entity extraction, and automated question answering.

1 answer

  1. foxwentingdos 1 Reputation point
    2022-03-09T13:10:46.043+00:00
    import os.path
    from glob import glob
    from tqdm import tqdm
    import csv
    from azure.core.credentials import AzureKeyCredential
    from azure.ai.textanalytics import TextAnalyticsClient
    
    
    endpoint = ''  # Endpoint and key removed before posting.
    key = ''
    
    
    class DocUnit(object):
        meta_header = list()
        meta_list = list()
        senti_dict = dict()
    
        def __init__(self, meta_path):
            self._nlp = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
    
            if not DocUnit.meta_list:
                with open(meta_path, 'r') as f:
                    f_csv = csv.reader(f)
                    DocUnit.meta_header = next(f_csv)
                    for row in f_csv:
                        DocUnit.meta_list.append(row)
    
        def __call__(self, file_name):
            with open(file_name, 'r') as f:
                doc = f.read()
    
            score_map = {'positive': 1, 'negative': -1, 'neutral': 0}
            docs_split = [i for i in doc.split('\n') if i != '']
            scores = list()
            # analyze_sentiment accepts at most 10 documents per request,
            # so send the paragraphs in batches of 10.
            for start in range(0, len(docs_split), 10):
                try:
                    docs = self._nlp.analyze_sentiment(docs_split[start:start + 10])
                    scores += [score_map[sentence.sentiment] for doc in docs for sentence in doc.sentences]
                except Exception:
                    # Skip a failed batch so a single timeout cannot
                    # stall the whole loop.
                    print('Time out!!!')
    
            num_neu = scores.count(0)
            num_neg = scores.count(-1)
            num_pos = scores.count(1)
            # Guard against an empty score list (e.g. every batch failed).
            mean_score = sum(scores) / len(scores) if scores else 0.0
    
            doc_name = os.path.splitext(os.path.split(file_name)[1])[0]
    
            DocUnit.senti_dict[doc_name] = [num_neu, num_pos, num_neg, mean_score]
    
    
    if __name__ == '__main__':
        doc_unit = DocUnit('sentiment_paragraph.csv')
        file_names = glob('./texts/*.txt')
        for file_name in tqdm(file_names):
            doc_unit(file_name)
    