Quickstart: get started with the client library SDKs or REST API

Get started with Azure Form Recognizer using the programming language of your choice. Azure Form Recognizer is an Azure Applied AI Service that lets you build automated data processing software using machine learning technology. Identify and extract text, key/value pairs, selection marks, table data, and more from your form documents—the service outputs structured data that includes the relationships in the original file. You can use Form Recognizer via the REST API or SDK. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.

You'll use the following APIs to extract structured data from forms and documents: layout analysis, the prebuilt receipt, business card, invoice, and identity document models, and custom models that you train on your own forms.

Important

  • This quickstart uses SDK version 3.1.1 and targets API version 2.1.

  • The code in this article uses synchronous methods and unsecured credential storage for simplicity.

Reference documentation | Library source code | Package (NuGet) | Samples

A full working sample of this C# quickstart is available on GitHub.

Prerequisites

  • Azure subscription - Create one for free
  • The Visual Studio IDE or the current version of .NET Core.
  • An Azure Storage blob that contains a set of training data. See Build a training data set for a custom model for tips and options for putting together your training data set. For this quickstart, you can use the files under the Train folder of the sample data set (download and extract sample_data.zip).
  • Once you have your Azure subscription, create a Form Recognizer resource in the Azure portal to get your key and endpoint. After it deploys, click Go to resource.
    • You will need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the quickstart.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Setting up

In a console window (such as cmd, PowerShell, or Bash), use the dotnet new command to create a new console app with the name formrecognizer-quickstart. This command creates a simple "Hello World" C# project with a single source file: Program.cs.

dotnet new console -n formrecognizer-quickstart

Change your directory to the newly created app folder. You can build the application with:

dotnet build

The build output should contain no warnings or errors.

...
Build succeeded.
 0 Warning(s)
 0 Error(s)
...

Install the client library

Within the application directory, install the Form Recognizer client library for .NET with the following command:

dotnet add package Azure.AI.FormRecognizer --version 3.1.1

From the project directory, open the Program.cs file in your preferred editor or IDE. Add the following using directives:

using Azure;
using Azure.AI.FormRecognizer;  
using Azure.AI.FormRecognizer.Models;
using Azure.AI.FormRecognizer.Training;

using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

In the application's Program class, create variables for your resource's key and endpoint.

Important

Go to the Azure portal. If the Form Recognizer resource you created in the Prerequisites section deployed successfully, click the Go to Resource button under Next Steps. You can find your key and endpoint in the resource's key and endpoint page, under resource management.

Remember to remove the key from your code when you're done, and never post it publicly. For production, use secure methods to store and access your credentials. See the Cognitive Services security article for more information.

private static readonly string endpoint = "PASTE_YOUR_FORM_RECOGNIZER_ENDPOINT_HERE";
private static readonly string apiKey = "PASTE_YOUR_FORM_RECOGNIZER_SUBSCRIPTION_KEY_HERE";
private static readonly AzureKeyCredential credential = new AzureKeyCredential(apiKey);
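
As a more secure alternative, you can read these values from environment variables instead of hard-coding them. The following is a minimal sketch; the variable names FORM_RECOGNIZER_ENDPOINT and FORM_RECOGNIZER_KEY are placeholders that you would need to set in your own environment before running the app.

private static readonly string endpoint =
    Environment.GetEnvironmentVariable("FORM_RECOGNIZER_ENDPOINT");
private static readonly string apiKey =
    Environment.GetEnvironmentVariable("FORM_RECOGNIZER_KEY");
private static readonly AzureKeyCredential credential = new AzureKeyCredential(apiKey);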

In the application's Main method, create the two clients and add calls to the asynchronous tasks used in this quickstart. You'll implement these methods later:

static void Main(string[] args)
{
    var recognizerClient = AuthenticateClient();
    var trainingClient = AuthenticateTrainingClient();

    var recognizeContent = RecognizeContent(recognizerClient);
    Task.WaitAll(recognizeContent);

    var analyzeReceipt = AnalyzeReceipt(recognizerClient, receiptUrl);
    Task.WaitAll(analyzeReceipt);

    var analyzeBusinessCard = AnalyzeBusinessCard(recognizerClient, bcUrl);
    Task.WaitAll(analyzeBusinessCard);

    var analyzeInvoice = AnalyzeInvoice(recognizerClient, invoiceUrl);
    Task.WaitAll(analyzeInvoice);

    var analyzeId = AnalyzeId(recognizerClient, idUrl);
    Task.WaitAll(analyzeId);

    var trainModel = TrainModel(trainingClient, trainingDataUrl);
    Task.WaitAll(trainModel);

    var trainModelWithLabels = TrainModelWithLabels(trainingClient, trainingDataUrl);
    Task.WaitAll(trainModelWithLabels);

    var analyzeForm = AnalyzePdfForm(recognizerClient, trainModel.Result, formUrl);
    Task.WaitAll(analyzeForm);

    var manageModels = ManageModels(trainingClient, trainingDataUrl);
    Task.WaitAll(manageModels);
}

Object model

With Form Recognizer, you can create two different client types. The first, FormRecognizerClient, is used to query the service to recognize form fields and content. The second, FormTrainingClient, is used to create and manage custom models to improve recognition.

FormRecognizerClient

FormRecognizerClient provides operations for:

  • Recognizing form fields and content, using custom models trained to analyze your custom forms. These values are returned in a collection of RecognizedForm objects. See example Analyze custom forms.
  • Recognizing form content, including tables, lines and words, without the need to train a model. Form content is returned in a collection of FormPage objects. See example Analyze layout.
  • Recognizing common fields from US receipts, business cards, invoices, and identity documents using a pre-trained model on the Form Recognizer service.

FormTrainingClient

FormTrainingClient provides operations for:

  • Training custom models to analyze all fields and values found in your custom forms. A CustomFormModel is returned indicating the form types the model will analyze, and the fields it will extract for each form type.
  • Training custom models to analyze specific fields and values you specify by labeling your custom forms. A CustomFormModel is returned indicating the fields the model will extract, as well as the estimated accuracy for each field.
  • Managing models created in your account.
  • Copying a custom model from one Form Recognizer resource to another.

See examples for Train a Model and Manage Custom Models.

Note

Models can also be trained using a graphical user interface such as the Form Recognizer Labeling Tool.

Authenticate the client

Below Main, create a new method named AuthenticateClient. You'll use this in other tasks to authenticate your requests to the Form Recognizer service. This method uses the AzureKeyCredential object, so that if needed, you can update the API key without creating new client objects.

Important

Get your key and endpoint from the Azure portal. If the Form Recognizer resource you created in the Prerequisites section deployed successfully, click the Go to Resource button under Next Steps. You can find your key and endpoint in the resource's key and endpoint page, under resource management.

Remember to remove the key from your code when you're done, and never post it publicly. For production, use secure methods to store and access your credentials. For example, Azure key vault.

private static FormRecognizerClient AuthenticateClient()
{
    var credential = new AzureKeyCredential(apiKey);
    var client = new FormRecognizerClient(new Uri(endpoint), credential);
    return client;
}

Repeat the steps above for a new method that authenticates a training client.

private static FormTrainingClient AuthenticateTrainingClient()
{
    var credential = new AzureKeyCredential(apiKey);
    var client = new FormTrainingClient(new Uri(endpoint), credential);
    return client;
}
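
Because the clients accept an AzureKeyCredential object, you can rotate the API key at runtime without re-creating the client. The following minimal sketch assumes you construct the client with the shared credential field defined earlier and that you've regenerated the key in the Azure portal; the new key value is a placeholder.

// Build the client with the shared credential field declared earlier.
var client = new FormRecognizerClient(new Uri(endpoint), credential);

// Later, after regenerating the key in the portal, update the credential
// in place. Existing clients pick up the new key on subsequent requests.
credential.Update("PASTE_YOUR_REGENERATED_KEY_HERE");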

Get assets for testing

You'll also need to add references to the URLs for your training and testing data. Add these to the root of your Program class.

    • To retrieve the SAS URL for your custom model training data, go to your storage resource in the Azure portal and select the Storage Explorer tab. Navigate to your container, right-click, and select Get shared access signature. It's important to get the SAS for your container, not for the storage account itself. Make sure the Read, Write, Delete and List permissions are checked, and click Create. Then copy the value in the URL section to a temporary location. It should have the form: https://<storage account>.blob.core.windows.net/<container name>?<SAS value>.

    SAS URL retrieval

  • Then, repeat the above steps to get the SAS URL of an individual document in your blob storage container. Save it to a temporary location as well.

  • Finally, save the URL of the sample image(s) included below (also available on GitHub).

private static string trainingDataUrl = "PASTE_YOUR_SAS_URL_OF_YOUR_FORM_FOLDER_IN_BLOB_STORAGE_HERE";
private static string formUrl = "PASTE_YOUR_FORM_RECOGNIZER_FORM_URL_HERE";
private static string receiptUrl = "https://docs.microsoft.com/azure/cognitive-services/form-recognizer/media" + "/contoso-allinone.jpg";
private static string bcUrl = "https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/business_cards/business-card-english.jpg";
private static string invoiceUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/simple-invoice.png";
private static string idUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/id-license.jpg";

Analyze layout

You can use Form Recognizer to analyze tables, lines, and words in documents, without needing to train a model. The returned value is a collection of FormPage objects: one for each page in the submitted document. For more information about layout extraction see the Layout conceptual guide.

To analyze the content of a file at a given URL, use the StartRecognizeContentFromUri method.

private static async Task RecognizeContent(FormRecognizerClient recognizerClient)
{
    var invoiceUri = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/simple-invoice.png";
    FormPageCollection formPages = await recognizerClient
        .StartRecognizeContentFromUri(new Uri(invoiceUri))
        .WaitForCompletionAsync();

Tip

You can also get content from a local file. See the FormRecognizerClient methods, such as StartRecognizeContent. Or, see the sample code on GitHub for scenarios involving local images.

The rest of this task prints the content information to the console.

    foreach (FormPage page in formPages)
    {
        Console.WriteLine($"Form Page {page.PageNumber} has {page.Lines.Count} lines.");

        for (int i = 0; i < page.Lines.Count; i++)
        {
            FormLine line = page.Lines[i];
            Console.WriteLine($"    Line {i} has {line.Words.Count} word{(line.Words.Count > 1 ? "s" : "")}, and text: '{line.Text}'.");
        }

        for (int i = 0; i < page.Tables.Count; i++)
        {
            FormTable table = page.Tables[i];
            Console.WriteLine($"Table {i} has {table.RowCount} rows and {table.ColumnCount} columns.");
            foreach (FormTableCell cell in table.Cells)
            {
                Console.WriteLine($"    Cell ({cell.RowIndex}, {cell.ColumnIndex}) contains text: '{cell.Text}'.");
            }
        }
    }
}

Output

Form Page 1 has 18 lines.
    Line 0 has 1 word, and text: 'Contoso'.
    Line 1 has 1 word, and text: 'Address:'.
    Line 2 has 3 words, and text: 'Invoice For: Microsoft'.
    Line 3 has 4 words, and text: '1 Redmond way Suite'.
    Line 4 has 3 words, and text: '1020 Enterprise Way'.
    Line 5 has 3 words, and text: '6000 Redmond, WA'.
    Line 6 has 3 words, and text: 'Sunnayvale, CA 87659'.
    Line 7 has 1 word, and text: '99243'.
    Line 8 has 2 words, and text: 'Invoice Number'.
    Line 9 has 2 words, and text: 'Invoice Date'.
    Line 10 has 3 words, and text: 'Invoice Due Date'.
    Line 11 has 1 word, and text: 'Charges'.
    Line 12 has 2 words, and text: 'VAT ID'.
    Line 13 has 1 word, and text: '34278587'.
    Line 14 has 1 word, and text: '6/18/2017'.
    Line 15 has 1 word, and text: '6/24/2017'.
    Line 16 has 1 word, and text: '$56,651.49'.
    Line 17 has 1 word, and text: 'PT'.
Table 0 has 2 rows and 6 columns.
    Cell (0, 0) contains text: 'Invoice Number'.
    Cell (0, 1) contains text: 'Invoice Date'.
    Cell (0, 2) contains text: 'Invoice Due Date'.
    Cell (0, 3) contains text: 'Charges'.
    Cell (0, 5) contains text: 'VAT ID'.
    Cell (1, 0) contains text: '34278587'.
    Cell (1, 1) contains text: '6/18/2017'.
    Cell (1, 2) contains text: '6/24/2017'.
    Cell (1, 3) contains text: '$56,651.49'.
    Cell (1, 5) contains text: 'PT'.
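
As the tip above mentions, you can also run layout analysis on a local file instead of a URL by passing a stream to StartRecognizeContent. The following is a minimal sketch; the method name and the file path are placeholders.

private static async Task RecognizeContentFromFile(FormRecognizerClient recognizerClient)
{
    string filePath = "PASTE_YOUR_LOCAL_FILE_PATH_HERE";

    using (FileStream stream = new FileStream(filePath, FileMode.Open))
    {
        // Same result type as the URL-based call: one FormPage per page.
        FormPageCollection formPages = await recognizerClient
            .StartRecognizeContent(stream)
            .WaitForCompletionAsync();

        Console.WriteLine($"Recognized {formPages.Count} page(s).");
    }
}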

Analyze receipts

This section demonstrates how to analyze and extract common fields from US receipts, using a pre-trained receipt model. For more information about receipt analysis, see the Receipts conceptual guide.

To analyze receipts from a URL, use the StartRecognizeReceiptsFromUri method.

private static async Task AnalyzeReceipt(
    FormRecognizerClient recognizerClient, string receiptUri)
{
    RecognizedFormCollection receipts = await recognizerClient.StartRecognizeReceiptsFromUri(new Uri(receiptUri)).WaitForCompletionAsync();

Tip

You can also analyze local receipt images. See the FormRecognizerClient methods, such as StartRecognizeReceipts. Or, see the sample code on GitHub for scenarios involving local images.

The returned value is a collection of RecognizedForm objects: one for each page in the submitted document. The following code processes the receipt at the given URI and prints the major fields and values to the console.

    foreach (RecognizedForm receipt in receipts)
    {
        FormField merchantNameField;
        if (receipt.Fields.TryGetValue("MerchantName", out merchantNameField))
        {
            if (merchantNameField.Value.ValueType == FieldValueType.String)
            {
                string merchantName = merchantNameField.Value.AsString();

                Console.WriteLine($"Merchant Name: '{merchantName}', with confidence {merchantNameField.Confidence}");
            }
        }

        FormField transactionDateField;
        if (receipt.Fields.TryGetValue("TransactionDate", out transactionDateField))
        {
            if (transactionDateField.Value.ValueType == FieldValueType.Date)
            {
                DateTime transactionDate = transactionDateField.Value.AsDate();

                Console.WriteLine($"Transaction Date: '{transactionDate}', with confidence {transactionDateField.Confidence}");
            }
        }

        FormField itemsField;
        if (receipt.Fields.TryGetValue("Items", out itemsField))
        {
            if (itemsField.Value.ValueType == FieldValueType.List)
            {
                foreach (FormField itemField in itemsField.Value.AsList())
                {
                    Console.WriteLine("Item:");

                    if (itemField.Value.ValueType == FieldValueType.Dictionary)
                    {
                        IReadOnlyDictionary<string, FormField> itemFields = itemField.Value.AsDictionary();

                        FormField itemNameField;
                        if (itemFields.TryGetValue("Name", out itemNameField))
                        {
                            if (itemNameField.Value.ValueType == FieldValueType.String)
                            {
                                string itemName = itemNameField.Value.AsString();

                                Console.WriteLine($"    Name: '{itemName}', with confidence {itemNameField.Confidence}");
                            }
                        }

                        FormField itemTotalPriceField;
                        if (itemFields.TryGetValue("TotalPrice", out itemTotalPriceField))
                        {
                            if (itemTotalPriceField.Value.ValueType == FieldValueType.Float)
                            {
                                float itemTotalPrice = itemTotalPriceField.Value.AsFloat();

                                Console.WriteLine($"    Total Price: '{itemTotalPrice}', with confidence {itemTotalPriceField.Confidence}");
                            }
                        }
                    }
                }
            }
        }
        FormField totalField;
        if (receipt.Fields.TryGetValue("Total", out totalField))
        {
            if (totalField.Value.ValueType == FieldValueType.Float)
            {
                float total = totalField.Value.AsFloat();

                Console.WriteLine($"Total: '{total}', with confidence '{totalField.Confidence}'");
            }
        }
    }
}

Output

Merchant Name: 'Contoso Contoso', with confidence 0.516
Transaction Date: '6/10/2019 12:00:00 AM', with confidence 0.985
Item:
    Name: '8GB RAM (Black)', with confidence 0.916
    Total Price: '999', with confidence 0.559
Item:
    Name: 'SurfacePen', with confidence 0.858
    Total Price: '99.99', with confidence 0.386
Total: '1203.39', with confidence '0.774'
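
Like the invoice call shown later in this quickstart, the receipt call accepts an options object. The following minimal sketch pins the locale, assuming US English receipts.

var options = new RecognizeReceiptsOptions() { Locale = "en-US" };
RecognizedFormCollection receipts = await recognizerClient
    .StartRecognizeReceiptsFromUri(new Uri(receiptUrl), options)
    .WaitForCompletionAsync();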

Analyze business cards

This section demonstrates how to analyze and extract common fields from English business cards, using a pre-trained model. For more information about business card analysis, see the Business cards conceptual guide.

To analyze business cards from a URL, use the StartRecognizeBusinessCardsFromUriAsync method.

private static async Task AnalyzeBusinessCard(
FormRecognizerClient recognizerClient, string bcUrl) {
  RecognizedFormCollection businessCards = await recognizerClient.StartRecognizeBusinessCardsFromUriAsync(new Uri(bcUrl)).WaitForCompletionAsync();

Tip

You can also analyze local business card images. See the FormRecognizerClient methods, such as StartRecognizeBusinessCards. Or, see the sample code on GitHub for scenarios involving local images.

The following code processes the business card at the given URI and prints the major fields and values to the console.

  foreach(RecognizedForm businessCard in businessCards) {
    FormField ContactNamesField;
    if (businessCard.Fields.TryGetValue("ContactNames", out ContactNamesField)) {
      if (ContactNamesField.Value.ValueType == FieldValueType.List) {
        foreach(FormField contactNameField in ContactNamesField.Value.AsList()) {
          Console.WriteLine($ "Contact Name: {contactNameField.ValueData.Text}");

          if (contactNameField.Value.ValueType == FieldValueType.Dictionary) {
            IReadOnlyDictionary < string,
            FormField > contactNameFields = contactNameField.Value.AsDictionary();

            FormField firstNameField;
            if (contactNameFields.TryGetValue("FirstName", out firstNameField)) {
              if (firstNameField.Value.ValueType == FieldValueType.String) {
                string firstName = firstNameField.Value.AsString();

                Console.WriteLine($ "    First Name: '{firstName}', with confidence {firstNameField.Confidence}");
              }
            }

            FormField lastNameField;
            if (contactNameFields.TryGetValue("LastName", out lastNameField)) {
              if (lastNameField.Value.ValueType == FieldValueType.String) {
                string lastName = lastNameField.Value.AsString();

                Console.WriteLine($ "    Last Name: '{lastName}', with confidence {lastNameField.Confidence}");
              }
            }
          }
        }
      }
    }

    FormField jobTitlesFields;
    if (businessCard.Fields.TryGetValue("JobTitles", out jobTitlesFields)) {
      if (jobTitlesFields.Value.ValueType == FieldValueType.List) {
        foreach(FormField jobTitleField in jobTitlesFields.Value.AsList()) {
          if (jobTitleField.Value.ValueType == FieldValueType.String) {
            string jobTitle = jobTitleField.Value.AsString();

            Console.WriteLine($ "  Job Title: '{jobTitle}', with confidence {jobTitleField.Confidence}");
          }
        }
      }
    }

    FormField departmentFields;
    if (businessCard.Fields.TryGetValue("Departments", out departmentFields)) {
      if (departmentFields.Value.ValueType == FieldValueType.List) {
        foreach(FormField departmentField in departmentFields.Value.AsList()) {
          if (departmentField.Value.ValueType == FieldValueType.String) {
            string department = departmentField.Value.AsString();

            Console.WriteLine($ "  Department: '{department}', with confidence {departmentField.Confidence}");
          }
        }
      }
    }

    FormField emailFields;
    if (businessCard.Fields.TryGetValue("Emails", out emailFields)) {
      if (emailFields.Value.ValueType == FieldValueType.List) {
        foreach(FormField emailField in emailFields.Value.AsList()) {
          if (emailField.Value.ValueType == FieldValueType.String) {
            string email = emailField.Value.AsString();

            Console.WriteLine($ "  Email: '{email}', with confidence {emailField.Confidence}");
          }
        }
      }
    }

    FormField websiteFields;
    if (businessCard.Fields.TryGetValue("Websites", out websiteFields)) {
      if (websiteFields.Value.ValueType == FieldValueType.List) {
        foreach(FormField websiteField in websiteFields.Value.AsList()) {
          if (websiteField.Value.ValueType == FieldValueType.String) {
            string website = websiteField.Value.AsString();

            Console.WriteLine($ "  Website: '{website}', with confidence {websiteField.Confidence}");
          }
        }
      }
    }

    FormField mobilePhonesFields;
    if (businessCard.Fields.TryGetValue("MobilePhones", out mobilePhonesFields)) {
      if (mobilePhonesFields.Value.ValueType == FieldValueType.List) {
        foreach(FormField mobilePhoneField in mobilePhonesFields.Value.AsList()) {
          if (mobilePhoneField.Value.ValueType == FieldValueType.PhoneNumber) {
            string mobilePhone = mobilePhoneField.Value.AsPhoneNumber();

            Console.WriteLine($ "  Mobile phone number: '{mobilePhone}', with confidence {mobilePhoneField.Confidence}");
          }
        }
      }
    }

    FormField otherPhonesFields;
    if (businessCard.Fields.TryGetValue("OtherPhones", out otherPhonesFields)) {
      if (otherPhonesFields.Value.ValueType == FieldValueType.List) {
        foreach(FormField otherPhoneField in otherPhonesFields.Value.AsList()) {
          if (otherPhoneField.Value.ValueType == FieldValueType.PhoneNumber) {
            string otherPhone = otherPhoneField.Value.AsPhoneNumber();

            Console.WriteLine($ "  Other phone number: '{otherPhone}', with confidence {otherPhoneField.Confidence}");
          }
        }
      }
    }

    FormField faxesFields;
    if (businessCard.Fields.TryGetValue("Faxes", out faxesFields)) {
      if (faxesFields.Value.ValueType == FieldValueType.List) {
        foreach(FormField faxField in faxesFields.Value.AsList()) {
          if (faxField.Value.ValueType == FieldValueType.PhoneNumber) {
            string fax = faxField.Value.AsPhoneNumber();

            Console.WriteLine($ "  Fax phone number: '{fax}', with confidence {faxField.Confidence}");
          }
        }
      }
    }

    FormField addressesFields;
    if (businessCard.Fields.TryGetValue("Addresses", out addressesFields)) {
      if (addressesFields.Value.ValueType == FieldValueType.List) {
        foreach(FormField addressField in addressesFields.Value.AsList()) {
          if (addressField.Value.ValueType == FieldValueType.String) {
            string address = addressField.Value.AsString();

            Console.WriteLine($ "  Address: '{address}', with confidence {addressField.Confidence}");
          }
        }
      }
    }

    FormField companyNamesFields;
    if (businessCard.Fields.TryGetValue("CompanyNames", out companyNamesFields)) {
      if (companyNamesFields.Value.ValueType == FieldValueType.List) {
        foreach(FormField companyNameField in companyNamesFields.Value.AsList()) {
          if (companyNameField.Value.ValueType == FieldValueType.String) {
            string companyName = companyNameField.Value.AsString();

            Console.WriteLine($ "  Company name: '{companyName}', with confidence {companyNameField.Confidence}");
          }
        }
      }
    }
  }
}

Analyze invoices

This section demonstrates how to analyze and extract common fields from sales invoices, using a pre-trained model. For more information about invoice analysis, see the Invoice conceptual guide.

To analyze invoices from a URL, use the StartRecognizeInvoicesFromUriAsync method.

private static async Task AnalyzeInvoice(
FormRecognizerClient recognizerClient, string invoiceUrl) {
  var options = new RecognizeInvoicesOptions() {
    Locale = "en-US"
  };
  RecognizedFormCollection invoices = await recognizerClient.StartRecognizeInvoicesFromUriAsync(new Uri(invoiceUrl), options).WaitForCompletionAsync();

Tip

You can also analyze local invoice images. See the FormRecognizerClient methods, such as StartRecognizeInvoices. Or, see the sample code on GitHub for scenarios involving local images.

The following code processes the invoice at the given URI and prints the major fields and values to the console.

  RecognizedForm invoice = invoices.Single();

  FormField invoiceIdField;
  if (invoice.Fields.TryGetValue("InvoiceId", out invoiceIdField)) {
    if (invoiceIdField.Value.ValueType == FieldValueType.String) {
      string invoiceId = invoiceIdField.Value.AsString();
      Console.WriteLine($ "    Invoice Id: '{invoiceId}', with confidence {invoiceIdField.Confidence}");
    }
  }

  FormField invoiceDateField;
  if (invoice.Fields.TryGetValue("InvoiceDate", out invoiceDateField)) {
    if (invoiceDateField.Value.ValueType == FieldValueType.Date) {
      DateTime invoiceDate = invoiceDateField.Value.AsDate();
      Console.WriteLine($ "    Invoice Date: '{invoiceDate}', with confidence {invoiceDateField.Confidence}");
    }
  }

  FormField dueDateField;
  if (invoice.Fields.TryGetValue("DueDate", out dueDateField)) {
    if (dueDateField.Value.ValueType == FieldValueType.Date) {
      DateTime dueDate = dueDateField.Value.AsDate();
      Console.WriteLine($ "    Due Date: '{dueDate}', with confidence {dueDateField.Confidence}");
    }
  }

  FormField vendorNameField;
  if (invoice.Fields.TryGetValue("VendorName", out vendorNameField)) {
    if (vendorNameField.Value.ValueType == FieldValueType.String) {
      string vendorName = vendorNameField.Value.AsString();
      Console.WriteLine($ "    Vendor Name: '{vendorName}', with confidence {vendorNameField.Confidence}");
    }
  }

  FormField vendorAddressField;
  if (invoice.Fields.TryGetValue("VendorAddress", out vendorAddressField)) {
    if (vendorAddressField.Value.ValueType == FieldValueType.String) {
      string vendorAddress = vendorAddressField.Value.AsString();
      Console.WriteLine($ "    Vendor Address: '{vendorAddress}', with confidence {vendorAddressField.Confidence}");
    }
  }

  FormField customerNameField;
  if (invoice.Fields.TryGetValue("CustomerName", out customerNameField)) {
    if (customerNameField.Value.ValueType == FieldValueType.String) {
      string customerName = customerNameField.Value.AsString();
      Console.WriteLine($ "    Customer Name: '{customerName}', with confidence {customerNameField.Confidence}");
    }
  }

  FormField customerAddressField;
  if (invoice.Fields.TryGetValue("CustomerAddress", out customerAddressField)) {
    if (customerAddressField.Value.ValueType == FieldValueType.String) {
      string customerAddress = customerAddressField.Value.AsString();
      Console.WriteLine($ "    Customer Address: '{customerAddress}', with confidence {customerAddressField.Confidence}");
    }
  }

  FormField customerAddressRecipientField;
  if (invoice.Fields.TryGetValue("CustomerAddressRecipient", out customerAddressRecipientField)) {
    if (customerAddressRecipientField.Value.ValueType == FieldValueType.String) {
      string customerAddressRecipient = customerAddressRecipientField.Value.AsString();
      Console.WriteLine($ "    Customer address recipient: '{customerAddressRecipient}', with confidence {customerAddressRecipientField.Confidence}");
    }
  }

  FormField invoiceTotalField;
  if (invoice.Fields.TryGetValue("InvoiceTotal", out invoiceTotalField)) {
    if (invoiceTotalField.Value.ValueType == FieldValueType.Float) {
      float invoiceTotal = invoiceTotalField.Value.AsFloat();
      Console.WriteLine($ "    Invoice Total: '{invoiceTotal}', with confidence {invoiceTotalField.Confidence}");
    }
  }
}

Analyze identity documents

This section demonstrates how to analyze and extract key information from government-issued identification documents—worldwide passports and U.S. driver's licenses—using the Form Recognizer prebuilt ID model. For more information about identity document analysis, see our prebuilt identification model conceptual guide.

To analyze identity documents from a URI, use the StartRecognizeIdentityDocumentsFromUriAsync method.

private static async Task AnalyzeId(
FormRecognizerClient recognizerClient, string idUrl) {
  RecognizedFormCollection identityDocuments = await recognizerClient.StartRecognizeIdentityDocumentsFromUriAsync(new Uri(idUrl)).WaitForCompletionAsync();

Tip

You can also analyze local identity document images. See the FormRecognizerClient methods, such as StartRecognizeIdentityDocumentsAsync. Also, see the sample code on GitHub for scenarios involving local images.

The following code processes the identity document at the given URI and prints the major fields and values to the console.

  RecognizedForm identityDocument = identityDocuments.Single();

  if (identityDocument.Fields.TryGetValue("Address", out FormField addressField)) {
    if (addressField.Value.ValueType == FieldValueType.String) {
      string address = addressField.Value.AsString();
      Console.WriteLine($"Address: '{address}', with confidence {addressField.Confidence}");
    }
  }

  if (identityDocument.Fields.TryGetValue("CountryRegion", out FormField countryRegionField)) {
    if (countryRegionField.Value.ValueType == FieldValueType.CountryRegion) {
      string countryRegion = countryRegionField.Value.AsCountryRegion();
      Console.WriteLine($"CountryRegion: '{countryRegion}', with confidence {countryRegionField.Confidence}");
    }
  }

  if (identityDocument.Fields.TryGetValue("DateOfBirth", out FormField dateOfBirthField)) {
    if (dateOfBirthField.Value.ValueType == FieldValueType.Date) {
      DateTime dateOfBirth = dateOfBirthField.Value.AsDate();
      Console.WriteLine($"Date Of Birth: '{dateOfBirth}', with confidence {dateOfBirthField.Confidence}");
    }
  }

  if (identityDocument.Fields.TryGetValue("DateOfExpiration", out FormField dateOfExpirationField)) {
    if (dateOfExpirationField.Value.ValueType == FieldValueType.Date) {
      DateTime dateOfExpiration = dateOfExpirationField.Value.AsDate();
      Console.WriteLine($"Date Of Expiration: '{dateOfExpiration}', with confidence {dateOfExpirationField.Confidence}");
    }
  }

  if (identityDocument.Fields.TryGetValue("DocumentNumber", out FormField documentNumberField)) {
    if (documentNumberField.Value.ValueType == FieldValueType.String) {
      string documentNumber = documentNumberField.Value.AsString();
      Console.WriteLine($"Document Number: '{documentNumber}', with confidence {documentNumberField.Confidence}");
    }
  }

  if (identityDocument.Fields.TryGetValue("FirstName", out FormField firstNameField)) {
    if (firstNameField.Value.ValueType == FieldValueType.String) {
      string firstName = firstNameField.Value.AsString();
      Console.WriteLine($"First Name: '{firstName}', with confidence {firstNameField.Confidence}");
    }
  }

  if (identityDocument.Fields.TryGetValue("LastName", out FormField lastNameField)) {
    if (lastNameField.Value.ValueType == FieldValueType.String) {
      string lastName = lastNameField.Value.AsString();
      Console.WriteLine($"Last Name: '{lastName}', with confidence {lastNameField.Confidence}");
    }
  }

  if (identityDocument.Fields.TryGetValue("Region", out FormField regionField)) {
    if (regionField.Value.ValueType == FieldValueType.String) {
      string region = regionField.Value.AsString();
      Console.WriteLine($"Region: '{region}', with confidence {regionField.Confidence}");
    }
  }
}

Train a custom model

This section demonstrates how to train a model with your own data. A trained model can output structured data that includes the key/value relationships in the original form document. After you train the model, you can test and retrain it and eventually use it to reliably extract data from more forms according to your needs.

Note

You can also train models with a graphical user interface such as the Form Recognizer sample labeling tool.

Train a model without labels

Train custom models to analyze all the fields and values found in your custom forms without manually labeling the training documents. The following method trains a model on a given set of documents and prints the model's status to the console.

private static async Task<String> TrainModel(
    FormTrainingClient trainingClient, string trainingDataUrl)
{
    CustomFormModel model = await trainingClient
    .StartTrainingAsync(new Uri(trainingDataUrl), useTrainingLabels: false)
    .WaitForCompletionAsync();

    Console.WriteLine($"Custom Model Info:");
    Console.WriteLine($"    Model Id: {model.ModelId}");
    Console.WriteLine($"    Model Status: {model.Status}");
    Console.WriteLine($"    Training model started on: {model.TrainingStartedOn}");
    Console.WriteLine($"    Training model completed on: {model.TrainingCompletedOn}");

The returned CustomFormModel object contains information on the form types the model can analyze and the fields it can extract from each form type. The following code block prints this information to the console.

foreach (CustomFormSubmodel submodel in model.Submodels)
{
    Console.WriteLine($"Submodel Form Type: {submodel.FormType}");
    foreach (CustomFormModelField field in submodel.Fields.Values)
    {
        Console.Write($"    FieldName: {field.Name}");
        if (field.Label != null)
        {
            Console.Write($", FieldLabel: {field.Label}");
        }
        Console.WriteLine("");
    }
}

Finally, return the trained model ID for use in later steps.

    return model.ModelId;
}

Output

This response has been truncated for readability.

Merchant Name: 'Contoso Contoso', with confidence 0.516
Transaction Date: '6/10/2019 12:00:00 AM', with confidence 0.985
Item:
    Name: '8GB RAM (Black)', with confidence 0.916
    Total Price: '999', with confidence 0.559
Item:
    Name: 'SurfacePen', with confidence 0.858
    Total Price: '99.99', with confidence 0.386
Total: '1203.39', with confidence '0.774'
Form Page 1 has 18 lines.
    Line 0 has 1 word, and text: 'Contoso'.
    Line 1 has 1 word, and text: 'Address:'.
    Line 2 has 3 words, and text: 'Invoice For: Microsoft'.
    Line 3 has 4 words, and text: '1 Redmond way Suite'.
    Line 4 has 3 words, and text: '1020 Enterprise Way'.
    ...
Table 0 has 2 rows and 6 columns.
    Cell (0, 0) contains text: 'Invoice Number'.
    Cell (0, 1) contains text: 'Invoice Date'.
    Cell (0, 2) contains text: 'Invoice Due Date'.
    Cell (0, 3) contains text: 'Charges'.
    ...
Custom Model Info:
    Model Id: 95035721-f19d-40eb-8820-0c806b42798b
    Model Status: Ready
    Training model started on: 8/24/2020 6:36:44 PM +00:00
    Training model completed on: 8/24/2020 6:36:50 PM +00:00
Submodel Form Type: form-95035721-f19d-40eb-8820-0c806b42798b
    FieldName: CompanyAddress
    FieldName: CompanyName
    FieldName: CompanyPhoneNumber
    ...
Custom Model Info:
    Model Id: e7a1181b-1fb7-40be-bfbe-1ee154183633
    Model Status: Ready
    Training model started on: 8/24/2020 6:36:44 PM +00:00
    Training model completed on: 8/24/2020 6:36:52 PM +00:00
Submodel Form Type: form-0
    FieldName: field-0, FieldLabel: Additional Notes:
    FieldName: field-1, FieldLabel: Address:
    FieldName: field-2, FieldLabel: Company Name:
    FieldName: field-3, FieldLabel: Company Phone:
    FieldName: field-4, FieldLabel: Dated As:
    FieldName: field-5, FieldLabel: Details
    FieldName: field-6, FieldLabel: Email:
    FieldName: field-7, FieldLabel: Hero Limited
    FieldName: field-8, FieldLabel: Name:
    FieldName: field-9, FieldLabel: Phone:
    ...

Train a model with labels

You can also train custom models by manually labeling the training documents. Training with labels leads to better performance in some scenarios. To train with labels, you need to have special label information files (<filename>.pdf.labels.json) in your blob storage container alongside the training documents. The Form Recognizer sample labeling tool provides a UI to help you create these label files. Once you have them, you can call the StartTrainingAsync method with the useTrainingLabels parameter set to true.

private static async Task<String> TrainModelWithLabels(
    FormTrainingClient trainingClient, string trainingDataUrl)
{
    CustomFormModel model = await trainingClient
    .StartTrainingAsync(new Uri(trainingDataUrl), useTrainingLabels: true)
    .WaitForCompletionAsync();
    Console.WriteLine($"Custom Model Info:");
    Console.WriteLine($"    Model Id: {model.ModelId}");
    Console.WriteLine($"    Model Status: {model.Status}");
    Console.WriteLine($"    Training model started on: {model.TrainingStartedOn}");
    Console.WriteLine($"    Training model completed on: {model.TrainingCompletedOn}");

The returned CustomFormModel indicates the fields the model can extract, along with the estimated accuracy for each field. The following code block prints this information to the console.

    foreach (CustomFormSubmodel submodel in model.Submodels)
    {
        Console.WriteLine($"Submodel Form Type: {submodel.FormType}");
        foreach (CustomFormModelField field in submodel.Fields.Values)
        {
            Console.Write($"    FieldName: {field.Name}");
            if (field.Label != null)
            {
                Console.Write($", FieldLabel: {field.Label}");
            }
            Console.WriteLine("");
        }
    }
    return model.ModelId;
}

Output

This response has been truncated for readability.

Form Page 1 has 18 lines.
    Line 0 has 1 word, and text: 'Contoso'.
    Line 1 has 1 word, and text: 'Address:'.
    Line 2 has 3 words, and text: 'Invoice For: Microsoft'.
    Line 3 has 4 words, and text: '1 Redmond way Suite'.
    Line 4 has 3 words, and text: '1020 Enterprise Way'.
    Line 5 has 3 words, and text: '6000 Redmond, WA'.
    ...
Table 0 has 2 rows and 6 columns.
    Cell (0, 0) contains text: 'Invoice Number'.
    Cell (0, 1) contains text: 'Invoice Date'.
    Cell (0, 2) contains text: 'Invoice Due Date'.
    ...
Merchant Name: 'Contoso Contoso', with confidence 0.516
Transaction Date: '6/10/2019 12:00:00 AM', with confidence 0.985
Item:
    Name: '8GB RAM (Black)', with confidence 0.916
    Total Price: '999', with confidence 0.559
Item:
    Name: 'SurfacePen', with confidence 0.858
    Total Price: '99.99', with confidence 0.386
Total: '1203.39', with confidence '0.774'
Custom Model Info:
    Model Id: 63c013e3-1cab-43eb-84b0-f4b20cb9214c
    Model Status: Ready
    Training model started on: 8/24/2020 6:42:54 PM +00:00
    Training model completed on: 8/24/2020 6:43:01 PM +00:00
Submodel Form Type: form-63c013e3-1cab-43eb-84b0-f4b20cb9214c
    FieldName: CompanyAddress
    FieldName: CompanyName
    FieldName: CompanyPhoneNumber
    FieldName: DatedAs
    FieldName: Email
    FieldName: Merchant
    ...

Analyze forms with a custom model

This section demonstrates how to extract key/value information and other content from your custom form types, using models you trained with your own forms.

Important

In order to implement this scenario, you must have already trained a model so you can pass its ID into the method below.

You'll use the StartRecognizeCustomFormsFromUri method.

// Analyze PDF form data
private static async Task AnalyzePdfForm(
    FormRecognizerClient recognizerClient, String modelId, string formUrl)
{
    RecognizedFormCollection forms = await recognizerClient
    .StartRecognizeCustomFormsFromUri(modelId, new Uri(formUrl))
    .WaitForCompletionAsync();

Tip

You can also analyze a local file. See the FormRecognizerClient methods, such as StartRecognizeCustomForms. Or, see the sample code on GitHub for scenarios involving local images.

The returned value is a collection of RecognizedForm objects: one for each page in the submitted document. The following code prints the analysis results to the console. It prints each recognized field and corresponding value, along with a confidence score.

    foreach (RecognizedForm form in forms)
    {
        Console.WriteLine($"Form of type: {form.FormType}");
        foreach (FormField field in form.Fields.Values)
        {
            Console.WriteLine($"Field '{field.Name}: ");

            if (field.LabelData != null)
            {
                Console.WriteLine($"    Label: '{field.LabelData.Text}");
            }

            Console.WriteLine($"    Value: '{field.ValueData.Text}");
            Console.WriteLine($"    Confidence: '{field.Confidence}");
        }
        Console.WriteLine("Table data:");
        foreach (FormPage page in form.Pages)
        {
            for (int i = 0; i < page.Tables.Count; i++)
            {
                FormTable table = page.Tables[i];
                Console.WriteLine($"Table {i} has {table.RowCount} rows and {table.ColumnCount} columns.");
                foreach (FormTableCell cell in table.Cells)
                {
                    Console.WriteLine($"    Cell ({cell.RowIndex}, {cell.ColumnIndex}) contains {(cell.IsHeader ? "header" : "text")}: '{cell.Text}'");
                }
            }
        }
    }
}

Output

This response has been truncated for readability.

Custom Model Info:
    Model Id: 9b0108ee-65c8-450e-b527-bb309d054fc4
    Model Status: Ready
    Training model started on: 8/24/2020 7:00:31 PM +00:00
    Training model completed on: 8/24/2020 7:00:32 PM +00:00
Submodel Form Type: form-9b0108ee-65c8-450e-b527-bb309d054fc4
    FieldName: CompanyAddress
    FieldName: CompanyName
    FieldName: CompanyPhoneNumber
    ...
Form Page 1 has 18 lines.
    Line 0 has 1 word, and text: 'Contoso'.
    Line 1 has 1 word, and text: 'Address:'.
    Line 2 has 3 words, and text: 'Invoice For: Microsoft'.
    Line 3 has 4 words, and text: '1 Redmond way Suite'.
    ...

Table 0 has 2 rows and 6 columns.
    Cell (0, 0) contains text: 'Invoice Number'.
    Cell (0, 1) contains text: 'Invoice Date'.
    Cell (0, 2) contains text: 'Invoice Due Date'.
    ...
Merchant Name: 'Contoso Contoso', with confidence 0.516
Transaction Date: '6/10/2019 12:00:00 AM', with confidence 0.985
Item:
    Name: '8GB RAM (Black)', with confidence 0.916
    Total Price: '999', with confidence 0.559
Item:
    Name: 'SurfacePen', with confidence 0.858
    Total Price: '99.99', with confidence 0.386
Total: '1203.39', with confidence '0.774'
Custom Model Info:
    Model Id: dc115156-ce0e-4202-bbe4-7426e7bee756
    Model Status: Ready
    Training model started on: 8/24/2020 7:00:31 PM +00:00
    Training model completed on: 8/24/2020 7:00:41 PM +00:00
Submodel Form Type: form-0
    FieldName: field-0, FieldLabel: Additional Notes:
    FieldName: field-1, FieldLabel: Address:
    FieldName: field-2, FieldLabel: Company Name:
    FieldName: field-3, FieldLabel: Company Phone:
    FieldName: field-4, FieldLabel: Dated As:
    ...
Form of type: custom:form
Field 'Azure.AI.FormRecognizer.Models.FieldValue:
    Value: '$56,651.49
    Confidence: '0.249
Field 'Azure.AI.FormRecognizer.Models.FieldValue:
    Value: 'PT
    Confidence: '0.245
Field 'Azure.AI.FormRecognizer.Models.FieldValue:
    Value: '99243
    Confidence: '0.114
   ...

Manage custom models

This section demonstrates how to manage the custom models stored in your account. You'll do multiple operations within the following method:

private static async Task ManageModels(
    FormTrainingClient trainingClient, string trainingFileUrl)
{

Check the number of models in the FormRecognizer resource account

The following code block checks how many models you have saved in your Form Recognizer account and compares it to the account limit.

// Check number of models in the FormRecognizer account, 
// and the maximum number of models that can be stored.
AccountProperties accountProperties = trainingClient.GetAccountProperties();
Console.WriteLine($"Account has {accountProperties.CustomModelCount} models.");
Console.WriteLine($"It can have at most {accountProperties.CustomModelLimit} models.");

Output

Account has 20 models.
It can have at most 5000 models.

List the models currently stored in the resource account

The following code block lists the current models in your account and prints their details to the console.

Pageable<CustomFormModelInfo> models = trainingClient.GetCustomModels();

foreach (CustomFormModelInfo modelInfo in models)
{
    Console.WriteLine($"Custom Model Info:");
    Console.WriteLine($"    Model Id: {modelInfo.ModelId}");
    Console.WriteLine($"    Model Status: {modelInfo.Status}");
    Console.WriteLine($"    Training model started on: {modelInfo.TrainingStartedOn}");
    Console.WriteLine($"    Training model completed on: {modelInfo.TrainingCompletedOn}");
}

Output

This response has been truncated for readability.

Custom Model Info:
    Model Id: 05932d5a-a2f8-4030-a2ef-4e5ed7112515
    Model Status: Creating
    Training model started on: 8/24/2020 7:35:02 PM +00:00
    Training model completed on: 8/24/2020 7:35:02 PM +00:00
Custom Model Info:
    Model Id: 150828c4-2eb2-487e-a728-60d5d504bd16
    Model Status: Ready
    Training model started on: 8/24/2020 7:33:25 PM +00:00
    Training model completed on: 8/24/2020 7:33:27 PM +00:00
Custom Model Info:
    Model Id: 3303e9de-6cec-4dfb-9e68-36510a6ecbb2
    Model Status: Ready
    Training model started on: 8/24/2020 7:29:27 PM +00:00
    Training model completed on: 8/24/2020 7:29:36 PM +00:00

Get a specific model using the model's ID

The following code block trains a new model (just like in the Train a model section) and then retrieves a second reference to it using its ID.

// Create a new model to store in the account
CustomFormModel model = await trainingClient.StartTrainingAsync(
    new Uri(trainingFileUrl)).WaitForCompletionAsync();

// Get the model that was just created
CustomFormModel modelCopy = trainingClient.GetCustomModel(model.ModelId);

Console.WriteLine($"Custom Model {modelCopy.ModelId} recognizes the following form types:");

foreach (CustomFormSubmodel submodel in modelCopy.Submodels)
{
    Console.WriteLine($"Submodel Form Type: {submodel.FormType}");
    foreach (CustomFormModelField field in submodel.Fields.Values)
    {
        Console.Write($"    FieldName: {field.Name}");
        if (field.Label != null)
        {
            Console.Write($", FieldLabel: {field.Label}");
        }
        Console.WriteLine("");
    }
}

Output

This response has been truncated for readability.

Custom Model Info:
    Model Id: 150828c4-2eb2-487e-a728-60d5d504bd16
    Model Status: Ready
    Training model started on: 8/24/2020 7:33:25 PM +00:00
    Training model completed on: 8/24/2020 7:33:27 PM +00:00
Submodel Form Type: form-150828c4-2eb2-487e-a728-60d5d504bd16
    FieldName: CompanyAddress
    FieldName: CompanyName
    FieldName: CompanyPhoneNumber
    FieldName: DatedAs
    FieldName: Email
    FieldName: Merchant
    FieldName: PhoneNumber
    FieldName: PurchaseOrderNumber
    FieldName: Quantity
    FieldName: Signature
    FieldName: Subtotal
    FieldName: Tax
    FieldName: Total
    FieldName: VendorName
    FieldName: Website
...

Delete a model from the resource account

You can also delete a model from your account by referencing its ID. This step also closes out the method.

    // Delete the model from the account.
    trainingClient.DeleteModel(model.ModelId);
}

Run the application

Run the application from your application directory with the dotnet run command.

dotnet run

Clean up resources

If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
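
For example, if you use the Azure CLI, you can delete a resource group with a command like the following; the resource group name is a placeholder.

az group delete --name PASTE_YOUR_RESOURCE_GROUP_NAME_HERE --yes --no-wait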

Troubleshooting

When you interact with the Cognitive Services Form Recognizer client library using the .NET SDK, errors returned by the service will result in a RequestFailedException. They'll include the same HTTP status code that would have been returned by a REST API request.

For example, if you submit a receipt image with an invalid URI, a 400 error is returned, indicating "Bad Request".

try
{
    RecognizedFormCollection receipts = await client.StartRecognizeReceiptsFromUri(new Uri(receiptUri)).WaitForCompletionAsync();
}
catch (RequestFailedException e)
{
    Console.WriteLine(e.ToString());
}

You'll notice that additional information, like the client request ID of the operation, is logged.


Message:
    Azure.RequestFailedException: Service request failed.
    Status: 400 (Bad Request)

Content:
    {"error":{"code":"FailedToDownloadImage","innerError":
    {"requestId":"8ca04feb-86db-4552-857c-fde903251518"},
    "message":"Failed to download image from input URL."}}

Headers:
    Transfer-Encoding: chunked
    x-envoy-upstream-service-time: REDACTED
    apim-request-id: REDACTED
    Strict-Transport-Security: REDACTED
    X-Content-Type-Options: REDACTED
    Date: Mon, 20 Apr 2020 22:48:35 GMT
    Content-Type: application/json; charset=utf-8
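
If you'd rather branch on the failure in code than print the whole exception, RequestFailedException also exposes the HTTP status code and the service error code. The following minimal sketch reuses the receipt call from the example above.

try
{
    RecognizedFormCollection receipts = await client.StartRecognizeReceiptsFromUri(new Uri(receiptUri)).WaitForCompletionAsync();
}
catch (RequestFailedException e)
{
    // Status is the HTTP status code (400 in the example above); ErrorCode is
    // the service-defined code, such as "FailedToDownloadImage".
    Console.WriteLine($"Request failed. Status: {e.Status}, error code: {e.ErrorCode}");
}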

Next steps

In this quickstart, you used the Form Recognizer .NET client library to train models and analyze forms in different ways. Next, learn tips to create a better training data set and produce more accurate models.

Important

  • This quickstart uses SDK version 3.1.1 and targets API version 2.1.

  • The code in this article uses synchronous methods and unsecured credential storage for simplicity.

Reference documentation | Library source code | Package (Maven) | Samples

Prerequisites

  • Azure subscription - Create one for free
  • The current version of the Java Development Kit (JDK)
  • The Gradle build tool, or another dependency manager.
  • Once you have your Azure subscription, create a Form Recognizer resource in the Azure portal to get your key and endpoint. After it deploys, click Go to resource.
    • You will need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the quickstart.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
  • An Azure Storage blob that contains a set of training data. See Build a training data set for a custom model for tips and options for putting together your training data set. For this quickstart, you can use the files under the Train folder of the sample data set (download and extract sample_data.zip).

Setting up

Create a new Gradle project

In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it.

mkdir myapp && cd myapp

Run the gradle init command from your working directory. This command will create essential build files for Gradle, including build.gradle.kts which is used at runtime to create and configure your application.

gradle init --type basic

When prompted to choose a DSL, select Kotlin.

Install the client library

This quickstart uses the Gradle dependency manager. You can find the client library and information for other dependency managers on the Maven Central Repository.

In your project's build.gradle.kts file, include the client library as an implementation statement, along with the required plugins and settings.

plugins {
    java
    application
}
application {
    mainClass.set("FormRecognizer")
}
repositories {
    mavenCentral()
}
dependencies {
    implementation(group = "com.azure", name = "azure-ai-formrecognizer", version = "3.1.1")
}

Create a Java file

From your working directory, run the following command:

mkdir -p src/main/java

Navigate to the new folder and create a file called FormRecognizer.java. Open it in your preferred editor or IDE and add the following import statements:

import com.azure.ai.formrecognizer.*;
import com.azure.ai.formrecognizer.training.*;
import com.azure.ai.formrecognizer.models.*;
import com.azure.ai.formrecognizer.training.models.*;

import java.util.concurrent.atomic.AtomicReference;
import java.util.List;
import java.util.Map;
import java.time.LocalDate;

import com.azure.core.credential.AzureKeyCredential;
import com.azure.core.http.rest.PagedIterable;
import com.azure.core.util.Context;
import com.azure.core.util.polling.SyncPoller;

Tip

If you want to view the entire file with the code samples in this quickstart, you can find it on GitHub.

In the application's FormRecognizer class, create variables for your resource's key and endpoint.

static final String key = "PASTE_YOUR_FORM_RECOGNIZER_SUBSCRIPTION_KEY_HERE";
static final String endpoint = "PASTE_YOUR_FORM_RECOGNIZER_ENDPOINT_HERE";

Important

Go to the Azure portal. If the Form Recognizer resource you created in the Prerequisites section deployed successfully, click the Go to Resource button under Next Steps. You can find your key and endpoint in the resource's key and endpoint page, under resource management.

Remember to remove the key from your code when you're done, and never post it publicly. For production, use secure methods to store and access your credentials. See the Cognitive Services security article for more information.
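
As a small step in that direction, you can read the key and endpoint from environment variables instead of hardcoding them. The following is a minimal sketch; the variable names FORM_RECOGNIZER_KEY and FORM_RECOGNIZER_ENDPOINT are placeholders you would set yourself.

// Placeholder environment variable names; set them in your shell before running the app.
static final String key = System.getenv("FORM_RECOGNIZER_KEY");
static final String endpoint = System.getenv("FORM_RECOGNIZER_ENDPOINT");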

In the application's main method, add calls for the methods used in this quickstart. You'll define these later. You'll also need to add references to the URLs for your training and testing data.

    • To retrieve the SAS URL for your custom model training data, go to your storage resource in the Azure portal and select the Storage Explorer tab. Navigate to your container, right-click, and select Get shared access signature. It's important to get the SAS for your container, not for the storage account itself. Make sure the Read, Write, Delete and List permissions are checked, and click Create. Then copy the value in the URL section to a temporary location. It should have the form: https://<storage account>.blob.core.windows.net/<container name>?<SAS value>.

    SAS URL retrieval

  • To get a URL of a form to test, you can use the above steps to get the SAS URL of an individual document in blob storage. Or, take the URL of a document located elsewhere.

  • Use the above method to get the URL of a receipt image as well.

String trainingDataUrl = "PASTE_YOUR_SAS_URL_OF_YOUR_FORM_FOLDER_IN_BLOB_STORAGE_HERE";
String formUrl = "PASTE_YOUR_FORM_RECOGNIZER_FORM_URL_HERE";
String receiptUrl = "https://docs.microsoft.com/azure/cognitive-services/form-recognizer/media" + "/contoso-allinone.jpg";
String bcUrl = "https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/business_cards/business-card-english.jpg";
String invoiceUrl = "https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/forms/Invoice_1.pdf";
String idUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/id-license.jpg";
// Call Form Recognizer scenarios:
System.out.println("Get form content...");
GetContent(recognizerClient, formUrl);

System.out.println("Analyze receipt...");
AnalyzeReceipt(recognizerClient, receiptUrl);

System.out.println("Analyze business card...");
AnalyzeBusinessCard(recognizerClient, bcUrl);

System.out.println("Analyze invoice...");
AnalyzeInvoice(recognizerClient, invoiceUrl);

System.out.println("Analyze id...");
AnalyzeId(recognizerClient, idUrl);

System.out.println("Train Model with training data...");
String modelId = TrainModel(trainingClient, trainingDataUrl);

System.out.println("Analyze PDF form...");
AnalyzePdfForm(recognizerClient, modelId, formUrl);

System.out.println("Manage models...");
ManageModels(trainingClient, trainingDataUrl);
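
For orientation, the following sketch shows how the overall FormRecognizer class fits together. The client objects are created in the Authenticate the client section below, and each method called above is defined later in this quickstart.

public class FormRecognizer {
    // Key and endpoint variables, as defined earlier.
    static final String key = "PASTE_YOUR_FORM_RECOGNIZER_SUBSCRIPTION_KEY_HERE";
    static final String endpoint = "PASTE_YOUR_FORM_RECOGNIZER_ENDPOINT_HERE";

    public static void main(String[] args) {
        // 1. Create recognizerClient and trainingClient (see "Authenticate the client").
        // 2. Declare the trainingDataUrl, formUrl, receiptUrl, bcUrl, invoiceUrl, and idUrl variables.
        // 3. Add the method calls shown above.
    }

    // GetContent, AnalyzeReceipt, AnalyzeBusinessCard, AnalyzeInvoice, AnalyzeId, TrainModel,
    // AnalyzePdfForm, and ManageModels are defined in the sections that follow.
}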

Object model

With Form Recognizer, you can create two different client types. The first, FormRecognizerClient, is used to query the service to recognize form fields and content. The second, FormTrainingClient, is used to create and manage custom models to improve recognition.

FormRecognizerClient

FormRecognizerClient provides operations for:

  • Recognizing form fields and content, using custom models trained to analyze your custom forms. These values are returned in a collection of RecognizedForm objects. See example Analyze custom forms.
  • Recognizing form content, including tables, lines and words, without the need to train a model. Form content is returned in a collection of FormPage objects. See example Analyze layout.
  • Recognizing common fields from US receipts, business cards, invoices, and identity documents using a pre-trained model on the Form Recognizer service.

FormTrainingClient

FormTrainingClient provides operations for:

  • Training custom models to analyze all fields and values found in your custom forms. A CustomFormModel is returned indicating the form types the model will analyze, and the fields it will extract for each form type.
  • Training custom models to analyze specific fields and values you specify by labeling your custom forms. A CustomFormModel is returned indicating the fields the model will extract, as well as the estimated accuracy for each field.
  • Managing models created in your account.
  • Copying a custom model from one Form Recognizer resource to another.

Note

Models can also be trained using a graphical user interface such as the Form Recognizer Labeling Tool.

Authenticate the client

At the top of your main method, add the following code. Here, you'll authenticate two client objects using the subscription variables you defined above. You'll use an AzureKeyCredential object, so that if needed, you can update the API key without creating new client objects.

FormRecognizerClient recognizerClient = new FormRecognizerClientBuilder()
        .credential(new AzureKeyCredential(key)).endpoint(endpoint).buildClient();

FormTrainingClient trainingClient = new FormTrainingClientBuilder().credential(new AzureKeyCredential(key))
        .endpoint(endpoint).buildClient();
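
For example, if you later regenerate your resource key, you can keep a reference to the credential and update it in place; the clients built from it pick up the new key on subsequent requests. This is a minimal sketch, and newKey is a placeholder for the regenerated key.

AzureKeyCredential credential = new AzureKeyCredential(key);
FormRecognizerClient rotatableClient = new FormRecognizerClientBuilder()
        .credential(credential).endpoint(endpoint).buildClient();

// Later, after regenerating the key in the Azure portal (newKey is a placeholder):
credential.update(newKey);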

Analyze layout

You can use Form Recognizer to analyze tables, lines, and words in documents, without needing to train a model. For more information about layout extraction, see the Layout conceptual guide.

To analyze the content of a file at a given URL, use the beginRecognizeContentFromUrl method.

private static void GetContent(FormRecognizerClient recognizerClient, String invoiceUri) {
    String analyzeFilePath = invoiceUri;
    SyncPoller<FormRecognizerOperationResult, List<FormPage>> recognizeContentPoller = recognizerClient
            .beginRecognizeContentFromUrl(analyzeFilePath);

    List<FormPage> contentResult = recognizeContentPoller.getFinalResult();

Tip

You can also get content from a local file. See the FormRecognizerClient methods, such as beginRecognizeContent. Or, see the sample code on GitHub for scenarios involving local images.

The returned value is a collection of FormPage objects: one for each page in the submitted document. The following code iterates through these objects and prints the page dimensions and the extracted table data.

    contentResult.forEach(formPage -> {
        // Table information
        System.out.println("----Recognizing content ----");
        System.out.printf("Has width: %f and height: %f, measured with unit: %s.%n", formPage.getWidth(),
                formPage.getHeight(), formPage.getUnit());
        formPage.getTables().forEach(formTable -> {
            System.out.printf("Table has %d rows and %d columns.%n", formTable.getRowCount(),
                    formTable.getColumnCount());
            formTable.getCells().forEach(formTableCell -> {
                System.out.printf("Cell has text %s.%n", formTableCell.getText());
            });
            System.out.println();
        });
    });
}

Output

Get form content...
----Recognizing content ----
Has width: 8.500000 and height: 11.000000, measured with unit: inch.
Table has 2 rows and 6 columns.
Cell has text Invoice Number.
Cell has text Invoice Date.
Cell has text Invoice Due Date.
Cell has text Charges.
Cell has text VAT ID.
Cell has text 458176.
Cell has text 3/28/2018.
Cell has text 4/16/2018.
Cell has text $89,024.34.
Cell has text ET.

Analyze receipts

This section demonstrates how to analyze and extract common fields from US receipts, using a pre-trained receipt model. For more information about receipt analysis, see the Receipts conceptual guide.

To analyze receipts from a URI, use the beginRecognizeReceiptsFromUrl method.

private static void AnalyzeReceipt(FormRecognizerClient recognizerClient, String receiptUri) {
    SyncPoller<FormRecognizerOperationResult, List<RecognizedForm>> syncPoller = recognizerClient
            .beginRecognizeReceiptsFromUrl(receiptUri);
    List<RecognizedForm> receiptPageResults = syncPoller.getFinalResult();

Tip

You can also analyze local receipt images. See the FormRecognizerClient methods, such as beginRecognizeReceipts. Or, see the sample code on GitHub for scenarios involving local images.

The returned value is a collection of RecognizedForm objects: one for each page in the submitted document. The next block of code iterates through the receipts and prints their details to the console.

for (int i = 0; i < receiptPageResults.size(); i++) {
    RecognizedForm recognizedForm = receiptPageResults.get(i);
    Map<String, FormField> recognizedFields = recognizedForm.getFields();
    System.out.printf("----------- Recognized Receipt page %d -----------%n", i);
    FormField merchantNameField = recognizedFields.get("MerchantName");
    if (merchantNameField != null) {
        if (FieldValueType.STRING == merchantNameField.getValue().getValueType()) {
            String merchantName = merchantNameField.getValue().asString();
            System.out.printf("Merchant Name: %s, confidence: %.2f%n", merchantName,
                    merchantNameField.getConfidence());
        }
    }
    FormField merchantAddressField = recognizedFields.get("MerchantAddress");
    if (merchantAddressField != null) {
        if (FieldValueType.STRING == merchantAddressField.getValue().getValueType()) {
            String merchantAddress = merchantAddressField.getValue().asString();
            System.out.printf("Merchant Address: %s, confidence: %.2f%n", merchantAddress,
                    merchantAddressField.getConfidence());
        }
    }
    FormField transactionDateField = recognizedFields.get("TransactionDate");
    if (transactionDateField != null) {
        if (FieldValueType.DATE == transactionDateField.getValue().getValueType()) {
            LocalDate transactionDate = transactionDateField.getValue().asDate();
            System.out.printf("Transaction Date: %s, confidence: %.2f%n", transactionDate,
                    transactionDateField.getConfidence());
        }
    }

The next block of code iterates through the individual items detected on the receipt and prints their details to the console.

        FormField receiptItemsField = recognizedFields.get("Items");
        if (receiptItemsField != null) {
            System.out.printf("Receipt Items: %n");
            if (FieldValueType.LIST == receiptItemsField.getValue().getValueType()) {
                List<FormField> receiptItems = receiptItemsField.getValue().asList();
                receiptItems.stream()
                        .filter(receiptItem -> FieldValueType.MAP == receiptItem.getValue().getValueType())
                        .map(formField -> formField.getValue().asMap())
                        .forEach(formFieldMap -> formFieldMap.forEach((key, formField) -> {
                            if ("Name".equals(key)) {
                                if (FieldValueType.STRING == formField.getValue().getValueType()) {
                                    String name = formField.getValue().asString();
                                    System.out.printf("Name: %s, confidence: %.2fs%n", name,
                                            formField.getConfidence());
                                }
                            }
                            if ("Quantity".equals(key)) {
                                if (FieldValueType.FLOAT == formField.getValue().getValueType()) {
                                    Float quantity = formField.getValue().asFloat();
                                    System.out.printf("Quantity: %f, confidence: %.2f%n", quantity,
                                            formField.getConfidence());
                                }
                            }
                            if ("Price".equals(key)) {
                                if (FieldValueType.FLOAT == formField.getValue().getValueType()) {
                                    Float price = formField.getValue().asFloat();
                                    System.out.printf("Price: %f, confidence: %.2f%n", price,
                                            formField.getConfidence());
                                }
                            }
                            if ("TotalPrice".equals(key)) {
                                if (FieldValueType.FLOAT == formField.getValue().getValueType()) {
                                    Float totalPrice = formField.getValue().asFloat();
                                    System.out.printf("Total Price: %f, confidence: %.2f%n", totalPrice,
                                            formField.getConfidence());
                                }
                            }
                        }));
            }
        }
    }
}

Output

Analyze receipt...
----------- Recognized Receipt page 0 -----------
Merchant Name: Contoso Contoso, confidence: 0.62
Merchant Address: 123 Main Street Redmond, WA 98052, confidence: 0.99
Transaction Date: 2020-06-10, confidence: 0.90
Receipt Items:
Name: Cappuccino, confidence: 0.96
Quantity: null, confidence: 0.96
Total Price: 2.200000, confidence: 0.95
Name: BACON & EGGS, confidence: 0.94
Quantity: null, confidence: 0.93
Total Price: null, confidence: 0.93

Analyze business cards

This section demonstrates how to analyze and extract common fields from English business cards, using a pre-trained model. For more information about business card analysis, see the Business cards conceptual guide.

To analyze business cards from a URL, use the beginRecognizeBusinessCardsFromUrl method.

private static void AnalyzeBusinessCard(FormRecognizerClient recognizerClient, String bcUrl) {
    SyncPoller<FormRecognizerOperationResult, List<RecognizedForm>> recognizeBusinessCardPoller = recognizerClient
            .beginRecognizeBusinessCardsFromUrl(bcUrl);

    List<RecognizedForm> businessCardPageResults = recognizeBusinessCardPoller.getFinalResult();

Tip

You can also analyze local business card images. See the FormRecognizerClient methods, such as beginRecognizeBusinessCards. Or, see the sample code on GitHub for scenarios involving local images.

The returned value is a collection of RecognizedForm objects: one for each card in the document. The following code processes the business card at the given URI and prints the major fields and values to the console.

    for (int i = 0; i < businessCardPageResults.size(); i++) {
        RecognizedForm recognizedForm = businessCardPageResults.get(i);
        Map<String, FormField> recognizedFields = recognizedForm.getFields();
        System.out.printf("----------- Recognized business card info for page %d -----------%n", i);
        FormField contactNamesFormField = recognizedFields.get("ContactNames");
        if (contactNamesFormField != null) {
            if (FieldValueType.LIST == contactNamesFormField.getValue().getValueType()) {
                List<FormField> contactNamesList = contactNamesFormField.getValue().asList();
                contactNamesList.stream().filter(contactName -> FieldValueType.MAP == contactName.getValue().getValueType()).map(contactName -> {
                    System.out.printf("Contact name: %s%n", contactName.getValueData().getText());
                    return contactName.getValue().asMap();
                }).forEach(contactNamesMap -> contactNamesMap.forEach((key, contactName) -> {
                    if ("FirstName".equals(key)) {
                        if (FieldValueType.STRING == contactName.getValue().getValueType()) {
                            String firstName = contactName.getValue().asString();
                            System.out.printf("\tFirst Name: %s, confidence: %.2f%n", firstName, contactName.getConfidence());
                        }
                    }
                    if ("LastName".equals(key)) {
                        if (FieldValueType.STRING == contactName.getValue().getValueType()) {
                            String lastName = contactName.getValue().asString();
                            System.out.printf("\tLast Name: %s, confidence: %.2f%n", lastName, contactName.getConfidence());
                        }
                    }
                }));
            }
        }

        FormField jobTitles = recognizedFields.get("JobTitles");
        if (jobTitles != null) {
            if (FieldValueType.LIST == jobTitles.getValue().getValueType()) {
                List<FormField> jobTitlesItems = jobTitles.getValue().asList();
                jobTitlesItems.stream().forEach(jobTitlesItem -> {
                    if (FieldValueType.STRING == jobTitlesItem.getValue().getValueType()) {
                        String jobTitle = jobTitlesItem.getValue().asString();
                        System.out.printf("Job Title: %s, confidence: %.2f%n", jobTitle, jobTitlesItem.getConfidence());
                    }
                });
            }
        }

        FormField departments = recognizedFields.get("Departments");
        if (departments != null) {
            if (FieldValueType.LIST == departments.getValue().getValueType()) {
                List<FormField> departmentsItems = departments.getValue().asList();
                departmentsItems.stream().forEach(departmentsItem -> {
                    if (FieldValueType.STRING == departmentsItem.getValue().getValueType()) {
                        String department = departmentsItem.getValue().asString();
                        System.out.printf("Department: %s, confidence: %.2f%n", department, departmentsItem.getConfidence());
                    }
                });
            }
        }

        FormField emails = recognizedFields.get("Emails");
        if (emails != null) {
            if (FieldValueType.LIST == emails.getValue().getValueType()) {
                List<FormField> emailsItems = emails.getValue().asList();
                emailsItems.stream().forEach(emailsItem -> {
                    if (FieldValueType.STRING == emailsItem.getValue().getValueType()) {
                        String email = emailsItem.getValue().asString();
                        System.out.printf("Email: %s, confidence: %.2f%n", email, emailsItem.getConfidence());
                    }
                });
            }
        }

        FormField websites = recognizedFields.get("Websites");
        if (websites != null) {
            if (FieldValueType.LIST == websites.getValue().getValueType()) {
                List<FormField> websitesItems = websites.getValue().asList();
                websitesItems.stream().forEach(websitesItem -> {
                    if (FieldValueType.STRING == websitesItem.getValue().getValueType()) {
                        String website = websitesItem.getValue().asString();
                        System.out.printf("Web site: %s, confidence: %.2f%n", website, websitesItem.getConfidence());
                    }
                });
            }
        }

        FormField mobilePhones = recognizedFields.get("MobilePhones");
        if (mobilePhones != null) {
            if (FieldValueType.LIST == mobilePhones.getValue().getValueType()) {
                List<FormField> mobilePhonesItems = mobilePhones.getValue().asList();
                mobilePhonesItems.stream().forEach(mobilePhonesItem -> {
                    if (FieldValueType.PHONE_NUMBER == mobilePhonesItem.getValue().getValueType()) {
                        String mobilePhoneNumber = mobilePhonesItem.getValue().asPhoneNumber();
                        System.out.printf("Mobile phone number: %s, confidence: %.2f%n", mobilePhoneNumber, mobilePhonesItem.getConfidence());
                    }
                });
            }
        }

        FormField otherPhones = recognizedFields.get("OtherPhones");
        if (otherPhones != null) {
            if (FieldValueType.LIST == otherPhones.getValue().getValueType()) {
                List<FormField> otherPhonesItems = otherPhones.getValue().asList();
                otherPhonesItems.stream().forEach(otherPhonesItem -> {
                    if (FieldValueType.PHONE_NUMBER == otherPhonesItem.getValue().getValueType()) {
                        String otherPhoneNumber = otherPhonesItem.getValue().asPhoneNumber();
                        System.out.printf("Other phone number: %s, confidence: %.2f%n", otherPhoneNumber, otherPhonesItem.getConfidence());
                    }
                });
            }
        }

        FormField faxes = recognizedFields.get("Faxes");
        if (faxes != null) {
            if (FieldValueType.LIST == faxes.getValue().getValueType()) {
                List<FormField> faxesItems = faxes.getValue().asList();
                faxesItems.stream().forEach(faxesItem -> {
                    if (FieldValueType.PHONE_NUMBER == faxesItem.getValue().getValueType()) {
                        String faxPhoneNumber = faxesItem.getValue().asPhoneNumber();
                        System.out.printf("Fax phone number: %s, confidence: %.2f%n", faxPhoneNumber, faxesItem.getConfidence());
                    }
                });
            }
        }

        FormField addresses = recognizedFields.get("Addresses");
        if (addresses != null) {
            if (FieldValueType.LIST == addresses.getValue().getValueType()) {
                List<FormField> addressesItems = addresses.getValue().asList();
                addressesItems.stream().forEach(addressesItem -> {
                    if (FieldValueType.STRING == addressesItem.getValue().getValueType()) {
                        String address = addressesItem.getValue().asString();
                        System.out.printf("Address: %s, confidence: %.2f%n", address, addressesItem.getConfidence());
                    }
                });
            }
        }

        FormField companyName = recognizedFields.get("CompanyNames");
        if (companyName != null) {
            if (FieldValueType.LIST == companyName.getValue().getValueType()) {
                List<FormField> companyNameItems = companyName.getValue().asList();
                companyNameItems.stream().forEach(companyNameItem -> {
                    if (FieldValueType.STRING == companyNameItem.getValue().getValueType()) {
                        String companyNameValue = companyNameItem.getValue().asString();
                        System.out.printf("Company name: %s, confidence: %.2f%n", companyNameValue, companyNameItem.getConfidence());
                    }
                });
            }
        }
    }
}

Analyze invoices

This section demonstrates how to analyze and extract common fields from sales invoices, using a pre-trained model. For more information about invoice analysis, see the Invoice conceptual guide.

To analyze invoices from a URL, use the beginRecognizeInvoicesFromUrl method.

private static void AnalyzeInvoice(FormRecognizerClient recognizerClient, String invoiceUrl) {
    SyncPoller<FormRecognizerOperationResult, List<RecognizedForm>> recognizeInvoicesPoller = recognizerClient
            .beginRecognizeInvoicesFromUrl(invoiceUrl);

    List<RecognizedForm> recognizedInvoices = recognizeInvoicesPoller.getFinalResult();

Tip

You can also analyze local invoices. See the FormRecognizerClient methods, such as beginRecognizeInvoices. Or, see the sample code on GitHub for scenarios involving local images.

The returned value is a collection of RecognizedForm objects: one for each invoice in the document. The following code processes the invoice at the given URI and prints the major fields and values to the console.

    for (int i = 0; i < recognizedInvoices.size(); i++) {
        RecognizedForm recognizedInvoice = recognizedInvoices.get(i);
        Map<String, FormField> recognizedFields = recognizedInvoice.getFields();
        System.out.printf("----------- Recognized invoice info for page %d -----------%n", i);
        FormField vendorNameField = recognizedFields.get("VendorName");
        if (vendorNameField != null) {
            if (FieldValueType.STRING == vendorNameField.getValue().getValueType()) {
                String vendorName = vendorNameField.getValue().asString();
                System.out.printf("Vendor Name: %s, confidence: %.2f%n", vendorName, vendorNameField.getConfidence());
            }
        }

        FormField vendorAddressField = recognizedFields.get("VendorAddress");
        if (vendorAddressField != null) {
            if (FieldValueType.STRING == vendorAddressField.getValue().getValueType()) {
                String vendorAddress = vendorAddressField.getValue().asString();
                System.out.printf("Vendor address: %s, confidence: %.2f%n", vendorAddress, vendorAddressField.getConfidence());
            }
        }

        FormField customerNameField = recognizedFields.get("CustomerName");
        if (customerNameField != null) {
            if (FieldValueType.STRING == customerNameField.getValue().getValueType()) {
                String customerName = customerNameField.getValue().asString();
                System.out.printf("Customer Name: %s, confidence: %.2f%n", customerName, customerNameField.getConfidence());
            }
        }

        FormField customerAddressRecipientField = recognizedFields.get("CustomerAddressRecipient");
        if (customerAddressRecipientField != null) {
            if (FieldValueType.STRING == customerAddressRecipientField.getValue().getValueType()) {
                String customerAddr = customerAddressRecipientField.getValue().asString();
                System.out.printf("Customer Address Recipient: %s, confidence: %.2f%n", customerAddr, customerAddressRecipientField.getConfidence());
            }
        }

        FormField invoiceIdField = recognizedFields.get("InvoiceId");
        if (invoiceIdField != null) {
            if (FieldValueType.STRING == invoiceIdField.getValue().getValueType()) {
                String invoiceId = invoiceIdField.getValue().asString();
                System.out.printf("Invoice Id: %s, confidence: %.2f%n", invoiceId, invoiceIdField.getConfidence());
            }
        }

        FormField invoiceDateField = recognizedFields.get("InvoiceDate");
        if (invoiceDateField != null) {
            if (FieldValueType.DATE == invoiceDateField.getValue().getValueType()) {
                LocalDate invoiceDate = invoiceDateField.getValue().asDate();
                System.out.printf("Invoice Date: %s, confidence: %.2f%n", invoiceDate, invoiceDateField.getConfidence());
            }
        }

        FormField invoiceTotalField = recognizedFields.get("InvoiceTotal");
        if (invoiceTotalField != null) {
            if (FieldValueType.FLOAT == invoiceTotalField.getValue().getValueType()) {
                Float invoiceTotal = invoiceTotalField.getValue().asFloat();
                System.out.printf("Invoice Total: %.2f, confidence: %.2f%n", invoiceTotal, invoiceTotalField.getConfidence());
            }
        }
    }
}

Analyze identity documents

This section demonstrates how to analyze and extract key information from government-issued identification documents—worldwide passports and U.S. driver's licenses—using the Form Recognizer prebuilt ID model. For more information about identity document analysis, see our prebuilt identification model conceptual guide.

To analyze identity documents from a URI, use the beginRecognizeIdentityDocumentsFromUrl method.

private static void AnalyzeId(FormRecognizerClient client, String idUrl) {
    SyncPoller<FormRecognizerOperationResult, List<RecognizedForm>> analyzeIdentityDocumentPoller = client
            .beginRecognizeIdentityDocumentsFromUrl(idUrl);

    List<RecognizedForm> identityDocumentResults = analyzeIdentityDocumentPoller.getFinalResult();

Tip

You can also analyze local identity document images. See the FormRecognizerClient methods, such as beginRecognizeIdentityDocuments. Also, see the sample code on GitHub for scenarios involving local images.

The following code processes the identity document at the given URI and prints the major fields and values to the console.

for (int i = 0; i < identityDocumentResults.size(); i++) {
    RecognizedForm recognizedForm = identityDocumentResults.get(i);
    Map<String, FormField> recognizedFields = recognizedForm.getFields();
    System.out.printf("----------- Recognized license info for page %d -----------%n", i);
    FormField addressField = recognizedFields.get("Address");
    if (addressField != null) {
        if (FieldValueType.STRING == addressField.getValue().getValueType()) {
            String address = addressField.getValue().asString();
            System.out.printf("Address: %s, confidence: %.2f%n", address, addressField.getConfidence());
        }
    }

    FormField countryRegionFormField = recognizedFields.get("CountryRegion");
    if (countryRegionFormField != null) {
        if (FieldValueType.COUNTRY_REGION == countryRegionFormField.getValue().getValueType()) {
            String countryRegion = countryRegionFormField.getValue().asCountryRegion();
            System.out.printf("Country or region: %s, confidence: %.2f%n", countryRegion, countryRegionFormField.getConfidence());
        }
    }

    FormField dateOfBirthField = recognizedFields.get("DateOfBirth");
    if (dateOfBirthField != null) {
        if (FieldValueType.DATE == dateOfBirthField.getValue().getValueType()) {
            LocalDate dateOfBirth = dateOfBirthField.getValue().asDate();
            System.out.printf("Date of Birth: %s, confidence: %.2f%n", dateOfBirth, dateOfBirthField.getConfidence());
        }
    }

    FormField dateOfExpirationField = recognizedFields.get("DateOfExpiration");
    if (dateOfExpirationField != null) {
        if (FieldValueType.DATE == dateOfExpirationField.getValue().getValueType()) {
            LocalDate expirationDate = dateOfExpirationField.getValue().asDate();
            System.out.printf("Document date of expiration: %s, confidence: %.2f%n", expirationDate, dateOfExpirationField.getConfidence());
        }
    }

    FormField documentNumberField = recognizedFields.get("DocumentNumber");
    if (documentNumberField != null) {
        if (FieldValueType.STRING == documentNumberField.getValue().getValueType()) {
            String documentNumber = documentNumberField.getValue().asString();
            System.out.printf("Document number: %s, confidence: %.2f%n", documentNumber, documentNumberField.getConfidence());
        }
    }

    FormField firstNameField = recognizedFields.get("FirstName");
    if (firstNameField != null) {
        if (FieldValueType.STRING == firstNameField.getValue().getValueType()) {
            String firstName = firstNameField.getValue().asString();
            System.out.printf("First Name: %s, confidence: %.2f%n", firstName, documentNumberField.getConfidence());
        }
    }

    FormField lastNameField = recognizedFields.get("LastName");
    if (lastNameField != null) {
        if (FieldValueType.STRING == lastNameField.getValue().getValueType()) {
            String lastName = lastNameField.getValue().asString();
            System.out.printf("Last name: %s, confidence: %.2f%n", lastName, lastNameField.getConfidence());
        }
    }

    FormField regionField = recognizedFields.get("Region");
    if (regionField != null) {
        if (FieldValueType.STRING == regionField.getValue().getValueType()) {
            String region = regionField.getValue().asString();
            System.out.printf("Region: %s, confidence: %.2f%n", region, regionField.getConfidence());
        }
    }
}

Train a custom model

This section demonstrates how to train a model with your own data. A trained model can output structured data that includes the key/value relationships in the original form document. After you train the model, you can test and retrain it and eventually use it to reliably extract data from more forms according to your needs.

Note

You can also train models with a graphical user interface such as the Form Recognizer sample labeling tool.

Train a model without labels

Train custom models to analyze all fields and values found in your custom forms without manually labeling the training documents.

The following method trains a model on a given set of documents and prints the model's status to the console.

private static String TrainModel(FormTrainingClient trainingClient, String trainingDataUrl) {
    SyncPoller<FormRecognizerOperationResult, CustomFormModel> trainingPoller = trainingClient
            .beginTraining(trainingDataUrl, false);

    CustomFormModel customFormModel = trainingPoller.getFinalResult();

    // Model Info
    System.out.printf("Model Id: %s%n", customFormModel.getModelId());
    System.out.printf("Model Status: %s%n", customFormModel.getModelStatus());
    System.out.printf("Training started on: %s%n", customFormModel.getTrainingStartedOn());
    System.out.printf("Training completed on: %s%n%n", customFormModel.getTrainingCompletedOn());

The returned CustomFormModel object contains information on the form types the model can analyze and the fields it can extract from each form type. The following code block prints this information to the console.

System.out.println("Recognized Fields:");
// looping through the subModels, which contains the fields they were trained on
// Since the given training documents are unlabeled, we still group them but
// they do not have a label.
customFormModel.getSubmodels().forEach(customFormSubmodel -> {
    // Since the training data is unlabeled, we are unable to return the accuracy of
    // this model
    System.out.printf("The subModel has form type %s%n", customFormSubmodel.getFormType());
    customFormSubmodel.getFields().forEach((field, customFormModelField) -> System.out
            .printf("The model found field '%s' with label: %s%n", field, customFormModelField.getLabel()));
});

Finally, this method returns the unique ID of the model.

    return customFormModel.getModelId();
}

Output

Train Model with training data...
Model Id: 20c3544d-97b4-49d9-b39b-dc32d85f1358
Model Status: ready
Training started on: 2020-08-31T16:52:09Z
Training completed on: 2020-08-31T16:52:23Z

Recognized Fields:
The subModel has form type form-0
The model found field 'field-0' with label: Address:
The model found field 'field-1' with label: Charges
The model found field 'field-2' with label: Invoice Date
The model found field 'field-3' with label: Invoice Due Date
The model found field 'field-4' with label: Invoice For:
The model found field 'field-5' with label: Invoice Number
The model found field 'field-6' with label: VAT ID

Train a model with labels

You can also train custom models by manually labeling the training documents. Training with labels leads to better performance in some scenarios. To train with labels, you need to have special label information files (<filename>.pdf.labels.json) in your blob storage container alongside the training documents. The Form Recognizer sample labeling tool provides a UI to help you create these label files. Once you have them, you can call the beginTraining method with the useTrainingLabels parameter set to true.

private static String TrainModelWithLabels(FormTrainingClient trainingClient, String trainingDataUrl) {
    // Train custom model
    String trainingSetSource = trainingDataUrl;
    SyncPoller<FormRecognizerOperationResult, CustomFormModel> trainingPoller = trainingClient
            .beginTraining(trainingSetSource, true);

    CustomFormModel customFormModel = trainingPoller.getFinalResult();

    // Model Info
    System.out.printf("Model Id: %s%n", customFormModel.getModelId());
    System.out.printf("Model Status: %s%n", customFormModel.getModelStatus());
    System.out.printf("Training started on: %s%n", customFormModel.getTrainingStartedOn());
    System.out.printf("Training completed on: %s%n%n", customFormModel.getTrainingCompletedOn());

The returned CustomFormModel indicates the fields the model can extract, along with its estimated accuracy in each field. The following code block prints this information to the console.

    // looping through the subModels, which contains the fields they were trained on
    // The labels are based on the ones you gave the training document.
    System.out.println("Recognized Fields:");
    // Since the data is labeled, we are able to return the accuracy of the model
    customFormModel.getSubmodels().forEach(customFormSubmodel -> {
        System.out.printf("The subModel with form type %s has accuracy: %.2f%n", customFormSubmodel.getFormType(),
                customFormSubmodel.getAccuracy());
        customFormSubmodel.getFields()
                .forEach((label, customFormModelField) -> System.out.printf(
                        "The model found field '%s' to have name: %s with an accuracy: %.2f%n", label,
                        customFormModelField.getName(), customFormModelField.getAccuracy()));
    });
    return customFormModel.getModelId();
}

Analyze forms with a custom model

This section demonstrates how to extract key/value information and other content from your custom form types, using models you trained with your own forms.

Important

In order to implement this scenario, you must have already trained a model so you can pass its ID into the method below. See the Train a model section.

You'll use the beginRecognizeCustomFormsFromUrl method.

// Analyze PDF form data
private static void AnalyzePdfForm(FormRecognizerClient formClient, String modelId, String pdfFormUrl) {
    SyncPoller<FormRecognizerOperationResult, List<RecognizedForm>> recognizeFormPoller = formClient
            .beginRecognizeCustomFormsFromUrl(modelId, pdfFormUrl);

    List<RecognizedForm> recognizedForms = recognizeFormPoller.getFinalResult();

Tip

You can also analyze a local file. See the FormRecognizerClient methods, such as beginRecognizeCustomForms. Or, see the sample code on GitHub for scenarios involving local images.

The returned value is a collection of RecognizedForm objects: one for each page in the submitted document. The following code prints the analysis results to the console. It prints each recognized field and corresponding value, along with a confidence score.

    for (int i = 0; i < recognizedForms.size(); i++) {
        final RecognizedForm form = recognizedForms.get(i);
        System.out.printf("----------- Recognized custom form info for page %d -----------%n", i);
        System.out.printf("Form type: %s%n", form.getFormType());
        form.getFields().forEach((label, formField) ->
        // label data is populated if you are using a model trained with unlabeled data,
        // since the service needs to make predictions for labels if not explicitly
        // given to it.
        System.out.printf("Field '%s' has label '%s' with a confidence " + "score of %.2f.%n", label,
                formField.getLabelData().getText(), formField.getConfidence()));
    }
}

Output

Analyze PDF form...
----------- Recognized custom form info for page 0 -----------
Form type: form-0
Field 'field-0' has label 'Address:' with a confidence score of 0.91.
Field 'field-1' has label 'Invoice For:' with a confidence score of 1.00.
Field 'field-2' has label 'Invoice Number' with a confidence score of 1.00.
Field 'field-3' has label 'Invoice Date' with a confidence score of 1.00.
Field 'field-4' has label 'Invoice Due Date' with a confidence score of 1.00.
Field 'field-5' has label 'Charges' with a confidence score of 1.00.
Field 'field-6' has label 'VAT ID' with a confidence score of 1.00.

Manage custom models

This section demonstrates how to manage the custom models stored in your account. The following code does all of the model management tasks in a single method, as an example. Start by copying the method signature below:

private static void ManageModels(FormTrainingClient trainingClient, String trainingFileUrl) {

Check the number of models in the Form Recognizer resource account

The following code block checks how many models you have saved in your Form Recognizer account and compares it to the account limit.

AtomicReference<String> modelId = new AtomicReference<>();

// First, we see how many custom models we have, and what our limit is
AccountProperties accountProperties = trainingClient.getAccountProperties();
System.out.printf("The account has %s custom models, and we can have at most %s custom models",
        accountProperties.getCustomModelCount(), accountProperties.getCustomModelLimit());

Output

The account has 12 custom models, and we can have at most 250 custom models

List the models currently stored in the resource account

The following code block lists the current models in your account and prints their details to the console.

// Next, we get a paged list of all of our custom models
PagedIterable<CustomFormModelInfo> customModels = trainingClient.listCustomModels();
System.out.println("We have following models in the account:");
customModels.forEach(customFormModelInfo -> {
    System.out.printf("Model Id: %s%n", customFormModelInfo.getModelId());
    // get custom model info
    modelId.set(customFormModelInfo.getModelId());
    CustomFormModel customModel = trainingClient.getCustomModel(customFormModelInfo.getModelId());
    System.out.printf("Model Id: %s%n", customModel.getModelId());
    System.out.printf("Model Status: %s%n", customModel.getModelStatus());
    System.out.printf("Training started on: %s%n", customModel.getTrainingStartedOn());
    System.out.printf("Training completed on: %s%n", customModel.getTrainingCompletedOn());
    customModel.getSubmodels().forEach(customFormSubmodel -> {
        System.out.printf("Custom Model Form type: %s%n", customFormSubmodel.getFormType());
        System.out.printf("Custom Model Accuracy: %.2f%n", customFormSubmodel.getAccuracy());
        if (customFormSubmodel.getFields() != null) {
            customFormSubmodel.getFields().forEach((fieldText, customFormModelField) -> {
                System.out.printf("Field Text: %s%n", fieldText);
                System.out.printf("Field Accuracy: %.2f%n", customFormModelField.getAccuracy());
            });
        }
    });
});

Output

This response has been truncated for readability.

We have the following models in the account:
Model Id: 0b048b60-86cc-47ec-9782-ad0ffaf7a5ce
Model Id: 0b048b60-86cc-47ec-9782-ad0ffaf7a5ce
Model Status: ready
Training started on: 2020-06-04T18:33:08Z
Training completed on: 2020-06-04T18:33:10Z
Custom Model Form type: form-0b048b60-86cc-47ec-9782-ad0ffaf7a5ce
Custom Model Accuracy: 1.00
Field Text: invoice date
Field Accuracy: 1.00
Field Text: invoice number
Field Accuracy: 1.00
...

Delete a model from the resource account

You can also delete a model from your account by referencing its ID.

    // Delete Custom Model
    System.out.printf("Deleted model with model Id: %s, operation completed with status: %s%n", modelId.get(),
            trainingClient.deleteModelWithResponse(modelId.get(), Context.NONE).getStatusCode());
}

Run the application

Navigate back to your main project directory. Then, build the app with the following command:

gradle build

Run the application with the run goal:

gradle run

Clean up resources

If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
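
For example, one way to remove everything at once is to delete the resource group with the Azure CLI; my-resource-group is a placeholder for the group that contains your Form Recognizer resource.

az group delete --name my-resource-group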

Troubleshooting

Form Recognizer clients throw ErrorResponseException exceptions. For example, if you provide an invalid file source URL, an ErrorResponseException is thrown with an error indicating the failure cause. In the following code snippet, the error is handled gracefully by catching the exception and displaying additional information about the error.

try {
    formRecognizerClient.beginRecognizeContentFromUrl("invalidSourceUrl");
} catch (ErrorResponseException e) {
    System.out.println(e.getMessage());
}

Enable client logging

The Azure SDKs for Java offer a consistent logging story to aid in troubleshooting application errors and speed up their resolution. The logs capture the flow of an application before it reaches a terminal state, which helps you locate the root issue. View the logging wiki for guidance about enabling logging.
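
As a sketch of one option using the shared azure-core building blocks (this assumes an SLF4J logger implementation is on your classpath), you can enable HTTP request and response logging when you build a client:

import com.azure.core.http.policy.HttpLogDetailLevel;
import com.azure.core.http.policy.HttpLogOptions;

FormRecognizerClient loggingClient = new FormRecognizerClientBuilder()
        .credential(new AzureKeyCredential(key))
        .endpoint(endpoint)
        // BASIC logs the request method, URL, and response status code for each call.
        .httpLogOptions(new HttpLogOptions().setLogLevel(HttpLogDetailLevel.BASIC))
        .buildClient();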

Next steps

In this quickstart, you used the Form Recognizer Java client library to train models and analyze forms in different ways. Next, learn tips to create a better training data set and produce more accurate models.

Important

  • This quickstart uses SDK version 3.1.1 and targets API version 2.1.

  • The code in this article uses synchronous methods and un-secured credentials storage for simplicity reasons. See the reference documentation below.

Reference documentation | Library source code | Package (npm) | Samples

Prerequisites

  • Azure subscription - Create one for free
  • The current version of Node.js
  • An Azure Storage blob that contains a set of training data. See Build a training data set for a custom model for tips and options for putting together your training data set. For this quickstart, you can use the files under the Train folder of the sample data set (download and extract sample_data.zip).
  • Once you have your Azure subscription, create a Form Recognizer resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
    • You will need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the quickstart.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Setting up

Create a new Node.js application

In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it.

mkdir myapp && cd myapp

Run the npm init command to create a node application with a package.json file.

npm init

Install the client library

Install the ai-form-recognizer NPM package:

npm install @azure/ai-form-recognizer

Your app's package.json file will be updated with the dependencies.

Create a file named index.js, open it, and import the following libraries:

const { FormRecognizerClient, FormTrainingClient, AzureKeyCredential } = require("@azure/ai-form-recognizer");
const fs = require("fs");

Create variables for your resource's Azure endpoint and key.

const apiKey = "PASTE_YOUR_FORM_RECOGNIZER_SUBSCRIPTION_KEY_HERE";
const endpoint = "PASTE_YOUR_FORM_RECOGNIZER_ENDPOINT_HERE";

Important

Go to the Azure portal. If the Form Recognizer resource you created in the Prerequisites section deployed successfully, click the Go to Resource button under Next Steps. You can find your key and endpoint in the resource's key and endpoint page, under resource management.

Remember to remove the key from your code when you're done, and never post it publicly. For production, use secure methods to store and access your credentials. See the Cognitive Services security article for more information.

Object model

With Form Recognizer, you can create two different client types. The first, FormRecognizerClient, is used to query the service to recognize form fields and content. The second, FormTrainingClient, is used to create and manage custom models to improve recognition.

FormRecognizerClient

FormRecognizerClient provides operations for:

  • Recognizing form fields and content using custom models trained to analyze your custom forms. These values are returned in a collection of RecognizedForm objects.
  • Recognizing form content, including tables, lines and words, without the need to train a model. Form content is returned in a collection of FormPage objects.
  • Recognizing common fields from US receipts, business cards, invoices, and identity documents using a pre-trained model on the Form Recognizer service.

FormTrainingClient

FormTrainingClient provides operations for:

  • Training custom models to analyze all fields and values found in your custom forms. A CustomFormModel is returned indicating the form types the model will analyze, and the fields it will extract for each form type. See the service's documentation on unlabeled model training for more details.
  • Training custom models to analyze specific fields and values you specify by labeling your custom forms. A CustomFormModel is returned indicating the fields the model will extract, as well as the estimated accuracy for each field. See the service's documentation on labeled model training for a more detailed explanation of applying labels to a training data set.
  • Managing models created in your account.
  • Copying a custom model from one Form Recognizer resource to another.

Note

Models can also be trained using a graphical user interface such as the Form Recognizer Labeling Tool.

Authenticate the client

Authenticate a client object using the subscription variables you defined. You'll use an AzureKeyCredential object, so that if needed, you can update the API key without creating new client objects. You'll also create a training client object.

const trainingClient = new FormTrainingClient(endpoint, new AzureKeyCredential(apiKey));
const client = new FormRecognizerClient(endpoint, new AzureKeyCredential(apiKey));

Get assets for testing

You'll also need to add references to the URLs for your training and testing data.

    • To retrieve the SAS URL for your custom model training data, go to your storage resource in the Azure portal and select the Storage Explorer tab. Navigate to your container, right-click, and select Get shared access signature. It's important to get the SAS for your container, not for the storage account itself. Make sure the Read, Write, Delete and List permissions are checked, and click Create. Then copy the value in the URL section to a temporary location. It should have the form: https://<storage account>.blob.core.windows.net/<container name>?<SAS value>.

    SAS URL retrieval

  • Use the sample form and receipt images included in the samples below (also available on GitHub), or use the above steps to get the SAS URL of an individual document in blob storage.

Analyze layout

You can use Form Recognizer to analyze tables, lines, and words in documents, without needing to train a model. For more information about layout extraction, see the Layout conceptual guide. To analyze the content of a file at a given URI, use the beginRecognizeContentFromUrl method.

async function recognizeContent() {
    const formUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/simple-invoice.png";
    const poller = await client.beginRecognizeContentFromUrl(formUrl);
    const pages = await poller.pollUntilDone();

    if (!pages || pages.length === 0) {
        throw new Error("Expecting non-empty list of pages!");
    }

    for (const page of pages) {
        console.log(
            `Page ${page.pageNumber}: width ${page.width} and height ${page.height} with unit ${page.unit}`
        );
        for (const table of page.tables) {
            for (const cell of table.cells) {
                console.log(`cell [${cell.rowIndex},${cell.columnIndex}] has text ${cell.text}`);
            }
        }
    }
}

recognizeContent().catch((err) => {
    console.error("The sample encountered an error:", err);
});

Tip

You can also get content from a local file with FormRecognizerClient methods, such as beginRecognizeContent.

Output

Page 1: width 8.5 and height 11 with unit inch
cell [0,0] has text Invoice Number
cell [0,1] has text Invoice Date
cell [0,2] has text Invoice Due Date
cell [0,3] has text Charges
cell [0,5] has text VAT ID
cell [1,0] has text 34278587
cell [1,1] has text 6/18/2017
cell [1,2] has text 6/24/2017
cell [1,3] has text $56,651.49
cell [1,5] has text PT

Analyze receipts

This section demonstrates how to analyze and extract common fields from US receipts, using a pre-trained receipt model. For more information about receipt analysis, see the Receipts conceptual guide.

To analyze receipts from a URI, use the beginRecognizeReceiptsFromUrl method. The following code processes a receipt at the given URI and prints the major fields and values to the console.

async function recognizeReceipt() {
    const receiptUrl = "https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/tests/sample_forms/receipt/contoso-receipt.png";
    const poller = await client.beginRecognizeReceiptsFromUrl(receiptUrl, {
        onProgress: (state) => { console.log(`status: ${state.status}`); }
    });

    const receipts = await poller.pollUntilDone();

    if (!receipts || receipts.length <= 0) {
        throw new Error("Expecting at lease one receipt in analysis result");
    }

    const receipt = receipts[0];
    console.log("First receipt:");
    const receiptTypeField = receipt.fields["ReceiptType"];
    if (receiptTypeField.valueType === "string") {
        console.log(`  Receipt Type: '${receiptTypeField.value || "<missing>"}', with confidence of ${receiptTypeField.confidence}`);
    }
    const merchantNameField = receipt.fields["MerchantName"];
    if (merchantNameField.valueType === "string") {
        console.log(`  Merchant Name: '${merchantNameField.value || "<missing>"}', with confidence of ${merchantNameField.confidence}`);
    }
    const transactionDate = receipt.fields["TransactionDate"];
    if (transactionDate.valueType === "date") {
        console.log(`  Transaction Date: '${transactionDate.value || "<missing>"}', with confidence of ${transactionDate.confidence}`);
    }
    const itemsField = receipt.fields["Items"];
    if (itemsField.valueType === "array") {
        for (const itemField of itemsField.value || []) {
            if (itemField.valueType === "object") {
                const itemNameField = itemField.value["Name"];
                if (itemNameField.valueType === "string") {
                    console.log(`    Item Name: '${itemNameField.value || "<missing>"}', with confidence of ${itemNameField.confidence}`);
                }
            }
        }
    }
    const totalField = receipt.fields["Total"];
    if (totalField.valueType === "number") {
        console.log(`  Total: '${totalField.value || "<missing>"}', with confidence of ${totalField.confidence}`);
    }
}

recognizeReceipt().catch((err) => {
    console.error("The sample encountered an error:", err);
});

Tip

You can also analyze local receipt images with FormRecognizerClient methods, such as beginRecognizeReceipts.

Output

status: notStarted
status: running
status: succeeded
First receipt:
  Receipt Type: 'Itemized', with confidence of 0.659
  Merchant Name: 'Contoso Contoso', with confidence of 0.516
  Transaction Date: 'Sun Jun 09 2019 17:00:00 GMT-0700 (Pacific Daylight Time)', with confidence of 0.985
    Item Name: '8GB RAM (Black)', with confidence of 0.916
    Item Name: 'SurfacePen', with confidence of 0.858
  Total: '1203.39', with confidence of 0.774

Analyze business cards

This section demonstrates how to analyze and extract common fields from English-language business cards, using a pre-trained model. For more information about business card analysis, see the Business cards conceptual guide.

To analyze business cards from a URL, use the beginRecognizeBusinessCardsFromURL method.

async function recognizeBusinessCards() {
    const bcUrl = "https://github.com/Azure-Samples/cognitive-services-REST-api-samples/curl/form-recognizer/businessCard.png";
    const poller = await client.beginRecognizeBusinessCardsFromUrl(bcUrl, {
        onProgress: (state) => {
            console.log(`status: ${state.status}`);
        }
    });

    const [businessCard] = await poller.pollUntilDone();

    if (businessCard === undefined) {
        throw new Error("Failed to extract data from at least one business card.");
    }

    const contactNames = businessCard.fields["ContactNames"].value;
    if (Array.isArray(contactNames)) {
        console.log("- Contact Names:");
        for (const contactName of contactNames) {
            if (contactName.valueType === "object") {
                const firstName = contactName.value?.["FirstName"].value ?? "<no first name>";
                const lastName = contactName.value?.["LastName"].value ?? "<no last name>";
                console.log(`  - ${firstName} ${lastName} (${contactName.confidence} confidence)`);
            }
        }
    }

    printSimpleArrayField(businessCard, "CompanyNames");
    printSimpleArrayField(businessCard, "Departments");
    printSimpleArrayField(businessCard, "JobTitles");
    printSimpleArrayField(businessCard, "Emails");
    printSimpleArrayField(businessCard, "Websites");
    printSimpleArrayField(businessCard, "Addresses");
    printSimpleArrayField(businessCard, "MobilePhones");
    printSimpleArrayField(businessCard, "Faxes");
    printSimpleArrayField(businessCard, "WorkPhones");
    printSimpleArrayField(businessCard, "OtherPhones");
}

// Helper function to print array field values. 
function printSimpleArrayField(businessCard, fieldName) {
    const fieldValues = businessCard.fields[fieldName]?.value;
    if (Array.isArray(fieldValues)) {
        console.log(`- ${fieldName}:`);
        for (const item of fieldValues) {
            console.log(`  - ${item.value ?? "<no value>"} (${item.confidence} confidence)`);
        }
    } else if (fieldValues === undefined) {
        console.log(`No ${fieldName} were found in the document.`);
    } else {
        console.error(
            `Error: expected field "${fieldName}" to be an Array, but it was a(n) ${businessCard.fields[fieldName].valueType}`
        );
    }
}

recognizeBusinessCards().catch((err) => {
    console.error("The sample encountered an error:", err);
});

Tip

You can also analyze local business card images with FormRecognizerClient methods, such as beginRecognizeBusinessCards.

Analyze invoices

This section demonstrates how to analyze and extract common fields from sales invoices, using a pre-trained model. For more information about invoice analysis, see the Invoice conceptual guide.

To analyze invoices from a URL, use the beginRecognizeInvoicesFromUrl method.

async function recognizeInvoices() {
    const invoiceUrl = "https://github.com/Azure-Samples/cognitive-services-REST-api-samples/curl/form-recognizer/invoice_sample.jpg";

    const poller = await client.beginRecognizeInvoicesFromUrl(invoiceUrl, {
        onProgress: (state) => {
            console.log(`status: ${state.status}`);
        }
    });

    const [invoice] = await poller.pollUntilDone();
    if (invoice === undefined) {
        throw new Error("Failed to extract data from at least one invoice.");
    }

    // Helper function to print fields.
    function fieldToString(field) {
        const {
            name,
            valueType,
            value,
            confidence
        } = field;
        return `${name} (${valueType}): '${value}' with confidence ${confidence}`;
    }

    console.log("Invoice fields:");

    for (const [name, field] of Object.entries(invoice.fields)) {
        if (field.valueType !== "array" && field.valueType !== "object") {
            console.log(`- ${name} ${fieldToString(field)}`);
        }
    }

    let idx = 0;

    console.log("- Items:");

    const items = invoice.fields["Items"]?.value;
    for (const item of items ?? []) {
        const value = item.value;

        const subFields = [
            "Description",
            "Quantity",
            "Unit",
            "UnitPrice",
            "ProductCode",
            "Date",
            "Tax",
            "Amount"
        ]
            .map((fieldName) => value[fieldName])
            .filter((field) => field !== undefined);

        console.log(
            [
                `  - Item #${idx}`,
                // Now we will convert those fields into strings to display
                ...subFields.map((field) => `    - ${fieldToString(field)}`)
            ].join("\n")
        );
        idx++;
    }
}

recognizeInvoices().catch((err) => {
    console.error("The sample encountered an error:", err);
});

Tip

You can also analyze local invoice documents with FormRecognizerClient methods, such as beginRecognizeInvoices.

Analyze identity documents

This section demonstrates how to analyze and extract key information from government-issued identification documents—worldwide passports and U.S. driver's licenses—using the Form Recognizer prebuilt ID model. For more information about ID document analysis, see our prebuilt identification model conceptual guide.

To analyze identity documents from a URL, use the beginRecognizeIdDocumentsFromUrl method.

async function recognizeIdDocuments() {
    const idUrl = "https://github.com/Azure-Samples/cognitive-services-REST-api-samples/curl/form-recognizer/id-license.jpg";
    const poller = await client.beginRecognizeIdDocumentsFromUrl(idUrl, {
        onProgress: (state) => {
            console.log(`status: ${state.status}`);
        }
    });

    const [idDocument] = await poller.pollUntilDone();

    if (idDocument === undefined) {
        throw new Error("Failed to extract data from at least one identity document.");
    }

    console.log("Document Type:", idDocument.formType);

    console.log("Identity Document Fields:");

    function printField(fieldName) {
        // Fields are extracted from the `fields` property of the document result
        const field = idDocument.fields[fieldName];
        console.log(
            `- ${fieldName} (${field?.valueType}): '${field?.value ?? "<missing>"}', with confidence ${field?.confidence}`
        );
    }

    printField("FirstName");
    printField("LastName");
    printField("DocumentNumber");
    printField("DateOfBirth");
    printField("DateOfExpiration");
    printField("Sex");
    printField("Address");
    printField("Country");
    printField("Region");
}

recognizeIdDocuments().catch((err) => {
    console.error("The sample encountered an error:", err);
});

Train a custom model

This section demonstrates how to train a model with your own data. A trained model can output structured data that includes the key/value relationships in the original form document. After you train the model, you can test and retrain it and eventually use it to reliably extract data from more forms according to your needs.

Note

You can also train models with a graphical user interface (GUI) such as the Form Recognizer sample labeling tool.

Train a model without labels

Train custom models to analyze all the fields and values found in your custom forms without manually labeling the training documents.

The following function trains a model on a given set of documents and prints the model's status to the console.

async function trainModel() {

    const containerSasUrl = "<SAS-URL-of-your-form-folder-in-blob-storage>";

    const poller = await trainingClient.beginTraining(containerSasUrl, false, {
        onProgress: (state) => { console.log(`training status: ${state.status}`); }
    });
    const model = await poller.pollUntilDone();

    if (!model) {
        throw new Error("Expecting valid training result!");
    }

    console.log(`Model ID: ${model.modelId}`);
    console.log(`Status: ${model.status}`);
    console.log(`Training started on: ${model.trainingStartedOn}`);
    console.log(`Training completed on: ${model.trainingCompletedOn}`);

    if (model.submodels) {
        for (const submodel of model.submodels) {
            // since the training data is unlabeled, we are unable to return the accuracy of this model
            console.log("We have recognized the following fields");
            for (const key in submodel.fields) {
                const field = submodel.fields[key];
                console.log(`The model found field '${field.name}'`);
            }
        }
    }
    // Training document information
    if (model.trainingDocuments) {
        for (const doc of model.trainingDocuments) {
            console.log(`Document name: ${doc.name}`);
            console.log(`Document status: ${doc.status}`);
            console.log(`Document page count: ${doc.pageCount}`);
            console.log(`Document errors: ${doc.errors}`);
        }
    }
}

trainModel().catch((err) => {
    console.error("The sample encountered an error:", err);
});

Output

This is the output for a model trained with the training data available from the JavaScript SDK. This sample output has been truncated for readability.

training status: creating
training status: ready
Model ID: 9d893595-1690-4cf2-a4b1-fbac0fb11909
Status: ready
Training started on: Thu Aug 20 2020 20:27:26 GMT-0700 (Pacific Daylight Time)
Training completed on: Thu Aug 20 2020 20:27:37 GMT-0700 (Pacific Daylight Time)
We have recognized the following fields
The model found field 'field-0'
The model found field 'field-1'
The model found field 'field-2'
The model found field 'field-3'
The model found field 'field-4'
The model found field 'field-5'
The model found field 'field-6'
The model found field 'field-7'
...
Document name: Form_1.jpg
Document status: succeeded
Document page count: 1
Document errors:
Document name: Form_2.jpg
Document status: succeeded
Document page count: 1
Document errors:
Document name: Form_3.jpg
Document status: succeeded
Document page count: 1
Document errors:
...

Train a model with labels

You can also train custom models by manually labeling the training documents. Training with labels leads to better performance in some scenarios. To train with labels, you need to have special label information files (<filename>.pdf.labels.json) in your blob storage container alongside the training documents. The Form Recognizer sample labeling tool provides a UI to help you create these label files. Once you have them, you can call the beginTraining method with the useTrainingLabels parameter set to true.

async function trainModelLabels() {

    const containerSasUrl = "<SAS-URL-of-your-form-folder-in-blob-storage>";

    const poller = await trainingClient.beginTraining(containerSasUrl, true, {
        onProgress: (state) => { console.log(`training status: ${state.status}`); }
    });
    const model = await poller.pollUntilDone();

    if (!model) {
        throw new Error("Expecting valid training result!");
    }

    console.log(`Model ID: ${model.modelId}`);
    console.log(`Status: ${model.status}`);
    console.log(`Training started on: ${model.trainingStartedOn}`);
    console.log(`Training completed on: ${model.trainingCompletedOn}`);

    if (model.submodels) {
        for (const submodel of model.submodels) {
            // because the training data is labeled, each submodel also reports an estimated accuracy for its fields
            console.log("We have recognized the following fields");
            for (const key in submodel.fields) {
                const field = submodel.fields[key];
                console.log(`The model found field '${field.name}'`);
            }
        }
    }
    // Training document information
    if (model.trainingDocuments) {
        for (const doc of model.trainingDocuments) {
            console.log(`Document name: ${doc.name}`);
            console.log(`Document status: ${doc.status}`);
            console.log(`Document page count: ${doc.pageCount}`);
            console.log(`Document errors: ${doc.errors}`);
        }
    }
}

trainModelLabels().catch((err) => {
    console.error("The sample encountered an error:", err);
});

Output

This is the output for a model trained with the training data available from the JavaScript SDK. This sample output has been truncated for readability.

training status: creating
training status: ready
Model ID: 789b1b37-4cc3-4e36-8665-9dde68618072
Status: ready
Training started on: Thu Aug 20 2020 20:30:37 GMT-0700 (Pacific Daylight Time)
Training completed on: Thu Aug 20 2020 20:30:43 GMT-0700 (Pacific Daylight Time)
We have recognized the following fields
The model found field 'CompanyAddress'
The model found field 'CompanyName'
The model found field 'CompanyPhoneNumber'
The model found field 'DatedAs'
...
Document name: Form_1.jpg
Document status: succeeded
Document page count: 1
Document errors: undefined
Document name: Form_2.jpg
Document status: succeeded
Document page count: 1
Document errors: undefined
Document name: Form_3.jpg
Document status: succeeded
Document page count: 1
Document errors: undefined
...

Analyze forms with a custom model

This section demonstrates how to extract key/value information and other content from your custom form types, using models you trained with your own forms.

Important

In order to implement this scenario, you must have already trained a model so you can pass its ID into the method below. See the Train a model section.

You'll use the beginRecognizeCustomFormsFromUrl method. The returned value is a collection of RecognizedForm objects: one for each page in the submitted document.

async function recognizeCustom() {
    // Model ID from when you trained your model.
    const modelId = "<modelId>";
    const formUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/simple-invoice.png";

    const poller = await client.beginRecognizeCustomFormsFromUrl(modelId, formUrl, {
        onProgress: (state) => { console.log(`status: ${state.status}`); }
    });
    const forms = await poller.pollUntilDone();

    console.log("Forms:");
    for (const form of forms || []) {
        console.log(`${form.formType}, page range: ${form.pageRange}`);
        console.log("Pages:");
        for (const page of form.pages || []) {
            console.log(`Page number: ${page.pageNumber}`);
            console.log("Tables");
            for (const table of page.tables || []) {
                for (const cell of table.cells) {
                    console.log(`cell (${cell.rowIndex},${cell.columnIndex}) ${cell.text}`);
                }
            }
        }

        console.log("Fields:");
        for (const fieldName in form.fields) {
            // each field is of type FormField
            const field = form.fields[fieldName];
            console.log(
                `Field ${fieldName} has value '${field.value}' with a confidence score of ${field.confidence}`
            );
        }
    }
}

recognizeCustom().catch((err) => {
    console.error("The sample encountered an error:", err);
});

Tip

You can also analyze local files with FormRecognizerClient methods, such as beginRecognizeCustomForms.

Output

status: notStarted
status: succeeded
Forms:
custom:form, page range: [object Object]
Pages:
Page number: 1
Tables
cell (0,0) Invoice Number
cell (0,1) Invoice Date
cell (0,2) Invoice Due Date
cell (0,3) Charges
cell (0,5) VAT ID
cell (1,0) 34278587
cell (1,1) 6/18/2017
cell (1,2) 6/24/2017
cell (1,3) $56,651.49
cell (1,5) PT
Fields:
Field Merchant has value 'Invoice For:' with a confidence score of 0.116
Field CompanyPhoneNumber has value '$56,651.49' with a confidence score of 0.249
Field VendorName has value 'Charges' with a confidence score of 0.145
Field CompanyAddress has value '1 Redmond way Suite 6000 Redmond, WA' with a confidence score of 0.258
Field CompanyName has value 'PT' with a confidence score of 0.245
Field Website has value '99243' with a confidence score of 0.114
Field DatedAs has value 'undefined' with a confidence score of undefined
Field Email has value 'undefined' with a confidence score of undefined
Field PhoneNumber has value 'undefined' with a confidence score of undefined
Field PurchaseOrderNumber has value 'undefined' with a confidence score of undefined
Field Quantity has value 'undefined' with a confidence score of undefined
Field Signature has value 'undefined' with a confidence score of undefined
Field Subtotal has value 'undefined' with a confidence score of undefined
Field Tax has value 'undefined' with a confidence score of undefined
Field Total has value 'undefined' with a confidence score of undefined

Manage custom models

This section demonstrates how to manage the custom models stored in your account. The following code does all of the model management tasks in a single function, as an example.

Get number of models

The following code block gets the number of models currently in your account.

async function countModels() {
    // First, we see how many custom models we have, and what our limit is
    const accountProperties = await trainingClient.getAccountProperties();
    console.log(
        `Our account has ${accountProperties.customModelCount} custom models, and we can have at most ${accountProperties.customModelLimit} custom models`
    );
}
countModels().catch((err) => {
    console.error("The sample encountered an error:", err);
});

Get list of models in account

The following code block provides a full list of available models in your account including information about when the model was created and its current status.

async function listModels() {

    // returns an async iterable iterator that supports paging
    const result = trainingClient.listCustomModels();
    let i = 0;
    for await (const modelInfo of result) {
        console.log(`model ${i++}:`);
        console.log(modelInfo);
    }
}

listModels().catch((err) => {
    console.error("The sample encountered an error:", err);
});

Output

model 0:
{
  modelId: '453cc2e6-e3eb-4e9f-aab6-e1ac7b87e09e',
  status: 'invalid',
  trainingStartedOn: 2020-08-20T22:28:52.000Z,
  trainingCompletedOn: 2020-08-20T22:28:53.000Z
}
model 1:
{
  modelId: '628739de-779c-473d-8214-d35c72d3d4f7',
  status: 'ready',
  trainingStartedOn: 2020-08-20T23:16:51.000Z,
  trainingCompletedOn: 2020-08-20T23:16:59.000Z
}
model 2:
{
  modelId: '789b1b37-4cc3-4e36-8665-9dde68618072',
  status: 'ready',
  trainingStartedOn: 2020-08-21T03:30:37.000Z,
  trainingCompletedOn: 2020-08-21T03:30:43.000Z
}
model 3:
{
  modelId: '9d893595-1690-4cf2-a4b1-fbac0fb11909',
  status: 'ready',
  trainingStartedOn: 2020-08-21T03:27:26.000Z,
  trainingCompletedOn: 2020-08-21T03:27:37.000Z
}

Get list of model IDs by page

This code block provides a paginated list of models and model IDs.

async function listModelsByPage() {
    // using `byPage()`
    let i = 1;
    for await (const response of trainingClient.listCustomModels().byPage()) {
        for (const modelInfo of response.modelList) {
            console.log(`model ${i++}: ${modelInfo.modelId}`);
        }
    }
}
listModelsByPage().catch((err) => {
    console.error("The sample encountered an error:", err);
});

Output

model 1: 453cc2e6-e3eb-4e9f-aab6-e1ac7b87e09e
model 2: 628739de-779c-473d-8214-d35c72d3d4f7
model 3: 789b1b37-4cc3-4e36-8665-9dde68618072

Get model by ID

The following function takes a model ID and gets the matching model object. This function isn't called by default.

async function getModel(modelId) {
    // Get the custom model with the given ID from the training client
    const model = await trainingClient.getCustomModel(modelId);
    console.log("--- Custom Model ---");
    console.log(`Model Id: ${model.modelId}`);
    console.log(`Status: ${model.status}`);
    console.log("Documents used in training:");
    for (const doc of model.trainingDocuments || []) {
        console.log(`- ${doc.name}`);
    }
}

Delete a model from the resource account

You can also delete a model from your account by referencing its ID. The following function deletes the model with the given ID; it isn't called by default.

async function deleteModel(modelId) {
    await trainingClient.deleteModel(modelId);
    try {
        const deleted = await trainingClient.getCustomModel(modelId);
        console.log(deleted);
    } catch (err) {
        // Expected
        console.log(`Model with id ${modelId} has been deleted`);
    }
}

Output

Model with id 789b1b37-4cc3-4e36-8665-9dde68618072 has been deleted

Run the application

Run the application with the node command on your quickstart file.

node index.js

Clean up resources

If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Troubleshooting

Enable logs

You can set the following environment variable to see debug logs when using this library.

export DEBUG=azure*

For more detailed instructions on how to enable logs, see the @azure/logger package docs.

Next steps

In this quickstart, you used the Form Recognizer JavaScript client library to train models and analyze forms in different ways. Next, learn tips to create a better training data set and produce more accurate models.

See also

Important

  • This quickstart uses SDK version 3.1.1 and targets API version 2.1.

  • The code in this article uses synchronous methods and un-secured credentials storage for simplicity reasons. See the reference documentation below.

Reference documentation | Library source code | Package (PyPI) | Samples

Prerequisites

  • Azure subscription - Create one for free
  • Python 3.x
    • Your Python installation should include pip. You can check if you have pip installed by running pip --version on the command line. Get pip by installing the latest version of Python.
  • An Azure Storage blob that contains a set of training data. See Build a training data set for a custom model for tips and options for putting together your training data set. For this quickstart, you can use the files under the Train folder of the sample data set (download and extract sample_data.zip).
  • Once you have your Azure subscription, create a Form Recognizer resource in the Azure portal to get your key and endpoint. After it deploys, click Go to resource.
    • You will need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. Later in the quickstart, you will paste your key and endpoint into the code below.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Setting up

Install the client library

After installing Python, you can install the latest version of the Form Recognizer client library with:

pip install azure-ai-formrecognizer 

Create a new python application

Create a new Python application in your preferred editor or IDE. Then import the following libraries.

import os
from azure.core.exceptions import ResourceNotFoundError
from azure.ai.formrecognizer import FormRecognizerClient
from azure.ai.formrecognizer import FormTrainingClient
from azure.core.credentials import AzureKeyCredential

Tip

If you want to view the entire file with the code samples in this quickstart, you can find it on GitHub.

Create variables for your resource's Azure endpoint and key.

endpoint = "PASTE_YOUR_FORM_RECOGNIZER_ENDPOINT_HERE"
key = "PASTE_YOUR_FORM_RECOGNIZER_SUBSCRIPTION_KEY_HERE"

Object model

With Form Recognizer, you can create two different client types. The first, form_recognizer_client, is used to query the service to recognize form fields and content. The second, form_training_client, is used to create and manage custom models to improve recognition.

FormRecognizerClient

form_recognizer_client provides operations for:

  • Recognizing form fields and content using custom models trained to analyze your custom forms.
  • Recognizing form content, including tables, lines and words, without the need to train a model.
  • Recognizing common fields from receipts, using a pre-trained receipt model on the Form Recognizer service.

FormTrainingClient

form_training_client provides operations for:

  • Training custom models to analyze all fields and values found in your custom forms. See the service's documentation on unlabeled model training for a more detailed explanation of creating a training data set.
  • Training custom models to analyze specific fields and values you specify by labeling your custom forms. See the service's documentation on labeled model training for a more detailed explanation of applying labels to a training data set.
  • Managing models created in your account.
  • Copying a custom model from one Form Recognizer resource to another.

Note

Models can also be trained using a graphical user interface such as the Form Recognizer Labeling Tool.

Authenticate the client

Here, you'll authenticate two client objects using the subscription variables you defined above. You'll use an AzureKeyCredential object, so that if needed, you can update the API key without creating new client objects.

form_recognizer_client = FormRecognizerClient(endpoint, AzureKeyCredential(key))
form_training_client = FormTrainingClient(endpoint, AzureKeyCredential(key))
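
If you rotate your key later, you can update a shared credential in place instead of constructing new clients. The following is a minimal sketch, assuming both clients share one AzureKeyCredential object; the new key value is a placeholder.

# Share a single credential between the two clients so a key rotation
# only has to be applied once.
credential = AzureKeyCredential(key)
form_recognizer_client = FormRecognizerClient(endpoint, credential)
form_training_client = FormTrainingClient(endpoint, credential)

# After regenerating the key in the Azure portal, update the credential
# in place; the existing clients pick up the new key automatically.
credential.update("PASTE_YOUR_REGENERATED_KEY_HERE")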

Get assets for testing

You'll need to add references to the URLs for your training and testing data.

  • To retrieve the SAS URL for your custom model training data, go to your storage resource in the Azure portal and select the Storage Explorer tab. Navigate to your container, right-click, and select Get shared access signature. It's important to get the SAS for your container, not for the storage account itself. Make sure the Read, Write, Delete, and List permissions are checked, and click Create. Then copy the value in the URL section to a temporary location. It should have the form: https://<storage account>.blob.core.windows.net/<container name>?<SAS value>. If you'd rather generate the SAS URL in code, see the sketch after this list.

    SAS URL retrieval

  • Use the sample form and receipt images included in the samples below (also available on GitHub), or use the steps above to get the SAS URL of an individual document in blob storage.
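
If you'd rather script the SAS step than use Storage Explorer, the following sketch generates a container SAS URL with the azure-storage-blob package (an extra dependency, not installed by this quickstart; install it with pip install azure-storage-blob). The account name, account key, and container name are placeholders.

from datetime import datetime, timedelta
from azure.storage.blob import ContainerSasPermissions, generate_container_sas

# Placeholders -- substitute your own storage account values.
account_name = "<storage account>"
account_key = "<storage account key>"
container_name = "<container name>"

# Grant the Read, Write, Delete, and List permissions that training requires.
sas_token = generate_container_sas(
    account_name,
    container_name,
    account_key=account_key,
    permission=ContainerSasPermissions(read=True, write=True, delete=True, list=True),
    expiry=datetime.utcnow() + timedelta(hours=2),
)

# This is the value to paste wherever the quickstart asks for a SAS URL.
container_sas_url = "https://{}.blob.core.windows.net/{}?{}".format(
    account_name, container_name, sas_token
)
print(container_sas_url)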

Note

The code snippets in this guide use remote forms accessed by URLs. If you want to process local form documents instead, see the related methods in the reference documentation and samples.

Analyze layout

You can use Form Recognizer to analyze tables, lines, and words in documents, without needing to train a model. For more information about layout extraction see the Layout conceptual guide.

To analyze the content of a file at a given URL, use the begin_recognize_content_from_url method. The returned value is a collection of FormPage objects: one for each page in the submitted document. The following code iterates through these objects and prints the extracted key/value pairs and table data.

formUrl = "https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/tests/sample_forms/forms/Form_1.jpg"

poller = form_recognizer_client.begin_recognize_content_from_url(formUrl)
form_pages = poller.result()

table = form_pages[0].tables[0]  # page 1, table 1
print("Table found on page {}:".format(table.page_number))
for cell in table.cells:
    print("Cell text: {}".format(cell.text))
    print("Location: {}".format(cell.bounding_box))
    print("Confidence score: {}\n".format(cell.confidence))

Tip

You can also get content from local images with the FormRecognizerClient methods, such as begin_recognize_content.

Output

Table found on page 1:
Cell text: Invoice Number
Location: [Point(x=0.5075, y=2.8088), Point(x=1.9061, y=2.8088), Point(x=1.9061, y=3.3219), Point(x=0.5075, y=3.3219)]
Confidence score: 1.0

Cell text: Invoice Date
Location: [Point(x=1.9061, y=2.8088), Point(x=3.3074, y=2.8088), Point(x=3.3074, y=3.3219), Point(x=1.9061, y=3.3219)]
Confidence score: 1.0

Cell text: Invoice Due Date
Location: [Point(x=3.3074, y=2.8088), Point(x=4.7074, y=2.8088), Point(x=4.7074, y=3.3219), Point(x=3.3074, y=3.3219)]
Confidence score: 1.0

Cell text: Charges
Location: [Point(x=4.7074, y=2.8088), Point(x=5.386, y=2.8088), Point(x=5.386, y=3.3219), Point(x=4.7074, y=3.3219)]
Confidence score: 1.0
...
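
As the tip above notes, you can also analyze a local document with begin_recognize_content. A minimal sketch follows; the file name is a placeholder for a form on your machine.

# Analyze a local form document instead of a URL.
with open("Form_1.jpg", "rb") as form_file:
    poller = form_recognizer_client.begin_recognize_content(form_file)
form_pages = poller.result()

table = form_pages[0].tables[0]  # page 1, table 1
print("Table found on page {}:".format(table.page_number))
for cell in table.cells:
    print("Cell text: {}".format(cell.text))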

Analyze receipts

This section demonstrates how to analyze and extract common fields from US receipts, using a pre-trained receipt model. For more information about receipt analysis, see the Receipts conceptual guide. To analyze receipts from a URL, use the begin_recognize_receipts_from_url method.

receiptUrl = "https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/tests/sample_forms/receipt/contoso-receipt.png"

poller = form_recognizer_client.begin_recognize_receipts_from_url(receiptUrl)
result = poller.result()

for receipt in result:
    for name, field in receipt.fields.items():
        if name == "Items":
            print("Receipt Items:")
            for idx, items in enumerate(field.value):
                print("...Item #{}".format(idx + 1))
                for item_name, item in items.value.items():
                    print("......{}: {} has confidence {}".format(item_name, item.value, item.confidence))
        else:
            print("{}: {} has confidence {}".format(name, field.value, field.confidence))

Tip

You can also analyze local receipt images with the FormRecognizerClient methods, such as begin_recognize_receipts.

Output

ReceiptType: Itemized has confidence 0.659
MerchantName: Contoso Contoso has confidence 0.516
MerchantAddress: 123 Main Street Redmond, WA 98052 has confidence 0.986
MerchantPhoneNumber: None has confidence 0.99
TransactionDate: 2019-06-10 has confidence 0.985
TransactionTime: 13:59:00 has confidence 0.968
Receipt Items:
...Item #1
......Name: 8GB RAM (Black) has confidence 0.916
......TotalPrice: 999.0 has confidence 0.559
...Item #2
......Quantity: None has confidence 0.858
......Name: SurfacePen has confidence 0.858
......TotalPrice: 99.99 has confidence 0.386
Subtotal: 1098.99 has confidence 0.964
Tax: 104.4 has confidence 0.713
Total: 1203.39 has confidence 0.774
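
As the tip above notes, begin_recognize_receipts also accepts a local file stream. A minimal sketch follows; the file name is a placeholder, and content_type is optional (the SDK can usually detect it from the stream).

# Analyze a local receipt image instead of a URL.
with open("contoso-receipt.png", "rb") as receipt_file:
    poller = form_recognizer_client.begin_recognize_receipts(
        receipt_file, content_type="image/png"
    )
result = poller.result()

for receipt in result:
    merchant = receipt.fields.get("MerchantName")
    if merchant:
        print("MerchantName: {} has confidence {}".format(
            merchant.value, merchant.confidence))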

Analyze business cards

This section demonstrates how to analyze and extract common fields from English business cards, using a pre-trained model. For more information about business card analysis, see the Business cards conceptual guide.

To analyze business cards from a URL, use the begin_recognize_business_cards_from_url method.

bcUrl = "https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/business_cards/business-card-english.jpg"

poller = form_recognizer_client.begin_recognize_business_cards_from_url(bcUrl)
business_cards = poller.result()

for idx, business_card in enumerate(business_cards):
    print("--------Recognizing business card #{}--------".format(idx+1))
    contact_names = business_card.fields.get("ContactNames")
    if contact_names:
        for contact_name in contact_names.value:
            print("Contact First Name: {} has confidence: {}".format(
                contact_name.value["FirstName"].value, contact_name.value["FirstName"].confidence
            ))
            print("Contact Last Name: {} has confidence: {}".format(
                contact_name.value["LastName"].value, contact_name.value["LastName"].confidence
            ))
    company_names = business_card.fields.get("CompanyNames")
    if company_names:
        for company_name in company_names.value:
            print("Company Name: {} has confidence: {}".format(company_name.value, company_name.confidence))
    departments = business_card.fields.get("Departments")
    if departments:
        for department in departments.value:
            print("Department: {} has confidence: {}".format(department.value, department.confidence))
    job_titles = business_card.fields.get("JobTitles")
    if job_titles:
        for job_title in job_titles.value:
            print("Job Title: {} has confidence: {}".format(job_title.value, job_title.confidence))
    emails = business_card.fields.get("Emails")
    if emails:
        for email in emails.value:
            print("Email: {} has confidence: {}".format(email.value, email.confidence))
    websites = business_card.fields.get("Websites")
    if websites:
        for website in websites.value:
            print("Website: {} has confidence: {}".format(website.value, website.confidence))
    addresses = business_card.fields.get("Addresses")
    if addresses:
        for address in addresses.value:
            print("Address: {} has confidence: {}".format(address.value, address.confidence))
    mobile_phones = business_card.fields.get("MobilePhones")
    if mobile_phones:
        for phone in mobile_phones.value:
            print("Mobile phone number: {} has confidence: {}".format(phone.value, phone.confidence))
    faxes = business_card.fields.get("Faxes")
    if faxes:
        for fax in faxes.value:
            print("Fax number: {} has confidence: {}".format(fax.value, fax.confidence))
    work_phones = business_card.fields.get("WorkPhones")
    if work_phones:
        for work_phone in work_phones.value:
            print("Work phone number: {} has confidence: {}".format(work_phone.value, work_phone.confidence))
    other_phones = business_card.fields.get("OtherPhones")
    if other_phones:
        for other_phone in other_phones.value:
            print("Other phone number: {} has confidence: {}".format(other_phone.value, other_phone.confidence))

Tip

You can also analyze local business card images with the FormRecognizerClient methods, such as begin_recognize_business_cards.

Analyze invoices

This section demonstrates how to analyze and extract common fields from sales invoices, using a pre-trained model. For more information about invoice analysis, see the Invoice conceptual guide.

To analyze invoices from a URL, use the begin_recognize_invoices_from_url method.

invoiceUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/simple-invoice.png"

poller = form_recognizer_client.begin_recognize_invoices_from_url(invoiceUrl)
invoices = poller.result()

for idx, invoice in enumerate(invoices):
    print("--------Recognizing invoice #{}--------".format(idx+1))
    vendor_name = invoice.fields.get("VendorName")
    if vendor_name:
        print("Vendor Name: {} has confidence: {}".format(vendor_name.value, vendor_name.confidence))
    vendor_address = invoice.fields.get("VendorAddress")
    if vendor_address:
        print("Vendor Address: {} has confidence: {}".format(vendor_address.value, vendor_address.confidence))
    customer_name = invoice.fields.get("CustomerName")
    if customer_name:
        print("Customer Name: {} has confidence: {}".format(customer_name.value, customer_name.confidence))
    customer_address = invoice.fields.get("CustomerAddress")
    if customer_address:
        print("Customer Address: {} has confidence: {}".format(customer_address.value, customer_address.confidence))
    customer_address_recipient = invoice.fields.get("CustomerAddressRecipient")
    if customer_address_recipient:
        print("Customer Address Recipient: {} has confidence: {}".format(customer_address_recipient.value, customer_address_recipient.confidence))
    invoice_id = invoice.fields.get("InvoiceId")
    if invoice_id:
        print("Invoice Id: {} has confidence: {}".format(invoice_id.value, invoice_id.confidence))
    invoice_date = invoice.fields.get("InvoiceDate")
    if invoice_date:
        print("Invoice Date: {} has confidence: {}".format(invoice_date.value, invoice_date.confidence))
    invoice_total = invoice.fields.get("InvoiceTotal")
    if invoice_total:
        print("Invoice Total: {} has confidence: {}".format(invoice_total.value, invoice_total.confidence))
    due_date = invoice.fields.get("DueDate")
    if due_date:
        print("Due Date: {} has confidence: {}".format(due_date.value, due_date.confidence))

Tip

You can also analyze local invoice images with the FormRecognizerClient methods, such as begin_recognize_invoices.
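
For example, here is a minimal sketch with a local file; the file name is a placeholder, and include_field_elements is an optional flag that also returns the text lines and words behind each recognized field.

# Analyze a local invoice instead of a URL.
with open("simple-invoice.png", "rb") as invoice_file:
    poller = form_recognizer_client.begin_recognize_invoices(
        invoice_file, include_field_elements=True
    )
invoices = poller.result()

for invoice in invoices:
    invoice_total = invoice.fields.get("InvoiceTotal")
    if invoice_total:
        print("Invoice Total: {} has confidence: {}".format(
            invoice_total.value, invoice_total.confidence))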

Analyze identity documents

This section demonstrates how to analyze and extract key information from government-issued identification documents—worldwide passports and U.S. driver's licenses—using the Form Recognizer prebuilt ID model. For more information about identity document analysis, see our prebuilt identification model conceptual guide.

To analyze identity documents from a URL, use the begin_recognize_identity_documents_from_url method.

idURL = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/id-license.jpg"

poller = form_recognizer_client.begin_recognize_identity_documents_from_url(idURL)
id_documents = poller.result()

for idx, id_document in enumerate(id_documents):
    print("--------Recognizing ID document #{}--------".format(idx+1))
    first_name = id_document.fields.get("FirstName")
    if first_name:
        print("First Name: {} has confidence: {}".format(first_name.value, first_name.confidence))
    last_name = id_document.fields.get("LastName")
    if last_name:
        print("Last Name: {} has confidence: {}".format(last_name.value, last_name.confidence))
    document_number = id_document.fields.get("DocumentNumber")
    if document_number:
        print("Document Number: {} has confidence: {}".format(document_number.value, document_number.confidence))
    dob = id_document.fields.get("DateOfBirth")
    if dob:
        print("Date of Birth: {} has confidence: {}".format(dob.value, dob.confidence))
    doe = id_document.fields.get("DateOfExpiration")
    if doe:
        print("Date of Expiration: {} has confidence: {}".format(doe.value, doe.confidence))
    sex = id_document.fields.get("Sex")
    if sex:
        print("Sex: {} has confidence: {}".format(sex.value, sex.confidence))
    address = id_document.fields.get("Address")
    if address:
        print("Address: {} has confidence: {}".format(address.value, address.confidence))
    country_region = id_document.fields.get("CountryRegion")
    if country_region:
        print("Country/Region: {} has confidence: {}".format(country_region.value, country_region.confidence))
    region = id_document.fields.get("Region")
    if region:
        print("Region: {} has confidence: {}".format(region.value, region.confidence))

Tip

You can also analyze local identity document images with the FormRecognizerClient methods, such as begin_recognize_identity_documents.

Train a custom model

This section demonstrates how to train a model with your own data. A trained model can output structured data that includes the key/value relationships in the original form document. After you train the model, you can test and retrain it and eventually use it to reliably extract data from more forms according to your needs.

Note

You can also train models with a graphical user interface such as the Form Recognizer sample labeling tool.

Train a model without labels

Train custom models to analyze all fields and values found in your custom forms without manually labeling the training documents.

The following code uses the training client with the begin_training function to train a model on a given set of documents. The returned CustomFormModel object contains information on the form types the model can analyze and the fields it can extract from each form type. The following code block prints this information to the console.

# To train a model you need an Azure Storage account.
# Use the SAS URL to access your training files.
trainingDataUrl = "PASTE_YOUR_SAS_URL_OF_YOUR_FORM_FOLDER_IN_BLOB_STORAGE_HERE"

poller = form_training_client.begin_training(trainingDataUrl, use_training_labels=False)
model = poller.result()

print("Model ID: {}".format(model.model_id))
print("Status: {}".format(model.status))
print("Training started on: {}".format(model.training_started_on))
print("Training completed on: {}".format(model.training_completed_on))

print("\nRecognized fields:")
for submodel in model.submodels:
    print(
        "The submodel with form type '{}' has recognized the following fields: {}".format(
            submodel.form_type,
            ", ".join(
                [
                    field.label if field.label else name
                    for name, field in submodel.fields.items()
                ]
            ),
        )
    )

# Training result information
for doc in model.training_documents:
    print("Document name: {}".format(doc.name))
    print("Document status: {}".format(doc.status))
    print("Document page count: {}".format(doc.page_count))
    print("Document errors: {}".format(doc.errors))

Output

This is the output for a model trained with the training data available from the Python SDK.

Model ID: 628739de-779c-473d-8214-d35c72d3d4f7
Status: ready
Training started on: 2020-08-20 23:16:51+00:00
Training completed on: 2020-08-20 23:16:59+00:00

Recognized fields:
The submodel with form type 'form-0' has recognized the following fields: Additional Notes:, Address:, Company Name:, Company Phone:, Dated As:, Details, Email:, Hero Limited, Name:, Phone:, Purchase Order, Purchase Order #:, Quantity, SUBTOTAL, Seattle, WA 93849 Phone:, Shipped From, Shipped To, TAX, TOTAL, Total, Unit Price, Vendor Name:, Website:
Document name: Form_1.jpg
Document status: succeeded
Document page count: 1
Document errors: []
Document name: Form_2.jpg
Document status: succeeded
Document page count: 1
Document errors: []
Document name: Form_3.jpg
Document status: succeeded
Document page count: 1
Document errors: []
Document name: Form_4.jpg
Document status: succeeded
Document page count: 1
Document errors: []
Document name: Form_5.jpg
Document status: succeeded
Document page count: 1
Document errors: []

Train a model with labels

You can also train custom models by manually labeling the training documents. Training with labels leads to better performance in some scenarios. The returned CustomFormModel indicates the fields the model can extract, along with its estimated accuracy in each field. The following code block prints this information to the console.

Important

To train with labels, you need to have special label information files (<filename>.pdf.labels.json) in your blob storage container alongside the training documents. The Form Recognizer sample labeling tool provides a UI to help you create these label files. Once you have them, you can call the begin_training function with the use_training_labels parameter set to True.

# To train a model you need an Azure Storage account.
# Use the SAS URL to access your training files.
trainingDataUrl = "PASTE_YOUR_SAS_URL_OF_YOUR_FORM_FOLDER_IN_BLOB_STORAGE_HERE"

poller = form_training_client.begin_training(trainingDataUrl, use_training_labels=True)
model = poller.result()
trained_model_id = model.model_id

print("Model ID: {}".format(trained_model_id))
print("Status: {}".format(model.status))
print("Training started on: {}".format(model.training_started_on))
print("Training completed on: {}".format(model.training_completed_on))

print("\nRecognized fields:")
for submodel in model.submodels:
    print(
        "The submodel with form type '{}' has recognized the following fields: {}".format(
            submodel.form_type,
            ", ".join(
                [
                    field.label if field.label else name
                    for name, field in submodel.fields.items()
                ]
            ),
        )
    )

# Training result information
for doc in model.training_documents:
    print("Document name: {}".format(doc.name))
    print("Document status: {}".format(doc.status))
    print("Document page count: {}".format(doc.page_count))
    print("Document errors: {}".format(doc.errors))

Output

This is the output for a model trained with the training data available from the Python SDK.

Model ID: ae636292-0b14-4e26-81a7-a0bfcbaf7c91

Status: ready
Training started on: 2020-08-20 23:20:56+00:00
Training completed on: 2020-08-20 23:20:57+00:00

Recognized fields:
The submodel with form type 'form-ae636292-0b14-4e26-81a7-a0bfcbaf7c91' has recognized the following fields: CompanyAddress, CompanyName, CompanyPhoneNumber, DatedAs, Email, Merchant, PhoneNumber, PurchaseOrderNumber, Quantity, Signature, Subtotal, Tax, Total, VendorName, Website
Document name: Form_1.jpg
Document status: succeeded
Document page count: 1
Document errors: []
Document name: Form_2.jpg
Document status: succeeded
Document page count: 1
Document errors: []
Document name: Form_3.jpg
Document status: succeeded
Document page count: 1
Document errors: []
Document name: Form_4.jpg
Document status: succeeded
Document page count: 1
Document errors: []
Document name: Form_5.jpg
Document status: succeeded
Document page count: 1
Document errors: []

Analyze forms with a custom model

This section demonstrates how to extract key/value information and other content from your custom form types, using models you trained with your own forms.

Important

In order to implement this scenario, you must have already trained a model so you can pass its ID into the method below. See the Train a model section.

You'll use the begin_recognize_custom_forms_from_url method. The returned value is a collection of RecognizedForm objects: one for each page in the submitted document. The following code prints the analysis results to the console. It prints each recognized field and corresponding value, along with a confidence score.


poller = form_recognizer_client.begin_recognize_custom_forms_from_url(
    model_id=trained_model_id, form_url=formUrl)
result = poller.result()

for recognized_form in result:
    print("Form type: {}".format(recognized_form.form_type))
    for name, field in recognized_form.fields.items():
        print("Field '{}' has label '{}' with value '{}' and a confidence score of {}".format(
            name,
            field.label_data.text if field.label_data else name,
            field.value,
            field.confidence
        ))

Tip

You can also analyze local images. See the FormRecognizerClient methods, such as begin_recognize_custom_forms. Or, see the sample code on GitHub for scenarios involving local images.

Output

Using the model from the previous example, the following output is provided.

Form type: form-ae636292-0b14-4e26-81a7-a0bfcbaf7c91
Field 'Merchant' has label 'Merchant' with value 'Invoice For:' and a confidence score of 0.116
Field 'CompanyAddress' has label 'CompanyAddress' with value '1 Redmond way Suite 6000 Redmond, WA' and a confidence score of 0.258
Field 'Website' has label 'Website' with value '99243' and a confidence score of 0.114
Field 'VendorName' has label 'VendorName' with value 'Charges' and a confidence score of 0.145
Field 'CompanyPhoneNumber' has label 'CompanyPhoneNumber' with value '$56,651.49' and a confidence score of 0.249
Field 'CompanyName' has label 'CompanyName' with value 'PT' and a confidence score of 0.245
Field 'DatedAs' has label 'DatedAs' with value 'None' and a confidence score of None
Field 'Email' has label 'Email' with value 'None' and a confidence score of None
Field 'PhoneNumber' has label 'PhoneNumber' with value 'None' and a confidence score of None
Field 'PurchaseOrderNumber' has label 'PurchaseOrderNumber' with value 'None' and a confidence score of None
Field 'Quantity' has label 'Quantity' with value 'None' and a confidence score of None
Field 'Signature' has label 'Signature' with value 'None' and a confidence score of None
Field 'Subtotal' has label 'Subtotal' with value 'None' and a confidence score of None
Field 'Tax' has label 'Tax' with value 'None' and a confidence score of None
Field 'Total' has label 'Total' with value 'None' and a confidence score of None
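
As the tip above notes, you can also run your custom model against a local file with begin_recognize_custom_forms. A minimal sketch follows; the file path is a placeholder for a test form on your machine.

# Analyze a local document with the custom model trained earlier.
with open("Test/Form_6.jpg", "rb") as test_form:
    poller = form_recognizer_client.begin_recognize_custom_forms(
        model_id=trained_model_id, form=test_form
    )
result = poller.result()

for recognized_form in result:
    print("Form type: {}".format(recognized_form.form_type))
    for name, field in recognized_form.fields.items():
        print("Field '{}' has value '{}' with a confidence score of {}".format(
            name, field.value, field.confidence))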

Manage custom models

This section demonstrates how to manage the custom models stored in your account.

Check the number of models in the Form Recognizer resource account

The following code block checks how many models you have saved in your Form Recognizer account and compares it to the account limit.

account_properties = form_training_client.get_account_properties()
print("Our account has {} custom models, and we can have at most {} custom models".format(
    account_properties.custom_model_count, account_properties.custom_model_limit
))

Output

Our account has 5 custom models, and we can have at most 5000 custom models

List the models currently stored in the resource account

The following code block lists the current models in your account and prints their details to the console. It also saves a reference to the first model.

# Next, we get a paged list of all of our custom models
custom_models = form_training_client.list_custom_models()

print("We have models with the following ids:")

# Let's pull out the first model
first_model = next(custom_models)
print(first_model.model_id)
for model in custom_models:
    print(model.model_id)

Output

This is a sample output for the test account.

We have models with the following ids:
453cc2e6-e3eb-4e9f-aab6-e1ac7b87e09e
628739de-779c-473d-8214-d35c72d3d4f7
ae636292-0b14-4e26-81a7-a0bfcbaf7c91
b4b5df77-8538-4ffb-a996-f67158ecd305
c6309148-6b64-4fef-aea0-d39521452699

Get a specific model using the model's ID

The following code block uses the trained_model_id saved from the labeled training section to retrieve details about that model.

custom_model = form_training_client.get_custom_model(model_id=trained_model_id)
print("Model ID: {}".format(custom_model.model_id))
print("Status: {}".format(custom_model.status))
print("Training started on: {}".format(custom_model.training_started_on))
print("Training completed on: {}".format(custom_model.training_completed_on))

Output

This is the sample output for the custom model created in the previous example.

Model ID: ae636292-0b14-4e26-81a7-a0bfcbaf7c91
Status: ready
Training started on: 2020-08-20 23:20:56+00:00
Training completed on: 2020-08-20 23:20:57+00:00

Delete a model from the resource account

You can also delete a model from your account by referencing its ID. This code deletes the model used in the previous section.

form_training_client.delete_model(model_id=custom_model.model_id)

try:
    form_training_client.get_custom_model(model_id=custom_model.model_id)
except ResourceNotFoundError:
    print("Successfully deleted model with id {}".format(custom_model.model_id))

Run the application

Run the application with the python command on your quickstart file.

python quickstart-file.py

Clean up resources

If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Troubleshooting

General

The Form Recognizer client library will raise exceptions defined in Azure Core.
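
For example, service errors surface as HttpResponseError (or one of its subclasses, such as the ResourceNotFoundError used earlier in this quickstart). A minimal sketch follows; the unreachable document URL is deliberate, just to trigger an error.

from azure.core.exceptions import HttpResponseError

try:
    # An unreachable document URL causes the service to reject the request.
    poller = form_recognizer_client.begin_recognize_receipts_from_url(
        "https://example.com/does-not-exist.png")
    poller.result()
except HttpResponseError as error:
    print("Request failed: {}".format(error.message))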

Logging

This library uses the standard logging library for logging. Basic information about HTTP sessions (URLs, headers, and so on) is logged at the INFO level.

Detailed DEBUG level logging, including request/response bodies and unredacted headers, can be enabled on a client with the logging_enable keyword argument:

import sys
import logging
from azure.ai.formrecognizer import FormRecognizerClient
from azure.core.credentials import AzureKeyCredential

# Create a logger for the 'azure' SDK
logger = logging.getLogger('azure')
logger.setLevel(logging.DEBUG)

# Configure a console output
handler = logging.StreamHandler(stream=sys.stdout)
logger.addHandler(handler)

endpoint = "PASTE_YOUR_FORM_RECOGNIZER_ENDPOINT_HERE"
credential = AzureKeyCredential("PASTE_YOUR_FORM_RECOGNIZER_SUBSCRIPTION_KEY_HERE")

# This client will log detailed information about its HTTP sessions, at DEBUG level
form_recognizer_client = FormRecognizerClient(endpoint, credential, logging_enable=True)

Similarly, logging_enable can enable detailed logging for a single operation, even when it isn't enabled for the client:

receiptUrl = "https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/tests/sample_forms/receipt/contoso-receipt.png"
poller = form_recognizer_client.begin_recognize_receipts_from_url(receiptUrl, logging_enable=True)

REST samples on GitHub

Next steps

In this quickstart, you used the Form Recognizer Python client library to train models and analyze forms in different ways. Next, learn tips to create a better training data set and produce more accurate models.

Note

  • This quickstart targets Azure Form Recognizer API version 2.1 using cURL to execute REST API calls.

| Form Recognizer REST API | Azure REST API reference |

Prerequisites

  • cURL installed.
  • PowerShell version 6.0+, or a similar command-line application.
  • Azure subscription - Create one for free
  • An Azure Storage blob that contains a set of training data. See Build a training data set for a custom model for tips and options for putting together your training data set. For this quickstart, you can use the files under the Train folder of the sample data set (download and extract sample_data.zip).
  • Once you have your Azure subscription, create a Form Recognizer resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
    • You will need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You will paste your key and endpoint into the code below later in the quickstart.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
  • A URL for an image of a receipt. You can use a sample image for this quickstart.
  • A URL for an image of a business card. You can use a sample image for this quickstart.
  • A URL for an image of an invoice. You can use a sample document for this quickstart.
  • A URL for an image of an identity document. You can use a sample image.

Analyze layout

You can use Form Recognizer to analyze and extract tables, selection marks, text, and structure in documents, without needing to train a model. For more information about layout extraction, see the Layout conceptual guide. Before you run the command, make these changes:

  1. Replace {endpoint} with the endpoint that you obtained with your Form Recognizer subscription.
  2. Replace {subscription key} with the subscription key you copied from the previous step.
  3. Replace {your-document-url} with one of the example URLs.

Request

curl -v -i -X POST "https://{endpoint}/formrecognizer/v2.1/layout/analyze" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{'source': '{your-document-url}'}"

Operation-Location

You'll receive a 202 (Accepted) response that includes an Operation-Location header. The value of this header contains a result ID that you can use to query the status of the asynchronous operation and get the results:

https://cognitiveservice/formrecognizer/v2.1/layout/analyzeResults/{resultId}.

In the following example URL, the string after analyzeResults/ is the result ID.

https://cognitiveservice/formrecognizer/v2.1/layout/analyzeResults/54f0b076-4e38-43e5-81bd-b85b8835fdfb

Get layout results

After you've called the Analyze Layout API, you call the Get Analyze Layout Result API to get the status of the operation and the extracted data. Before you run the command, make these changes:

  1. Replace {endpoint} with the endpoint that you obtained with your Form Recognizer subscription.
  2. Replace {subscription key} with the subscription key you copied from the previous step.
  3. Replace {resultId} with the result ID from the previous step.

Request

curl -v -X GET "https://{endpoint}/formrecognizer/v2.1/layout/analyzeResults/{resultId}" -H "Ocp-Apim-Subscription-Key: {subscription key}"
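
If you prefer to script the same two calls, here is a minimal sketch using Python's requests package (an assumption; it isn't part of this quickstart). The endpoint, key, and document URL are the same placeholders used in the cURL commands.

import time
import requests

endpoint = "https://{endpoint}"       # your Form Recognizer endpoint
key = "{subscription key}"            # your Form Recognizer key
document_url = "{your-document-url}"  # a publicly reachable document URL

# 1. Start the layout analysis. The result ID comes back in the
#    Operation-Location response header.
analyze = requests.post(
    "{}/formrecognizer/v2.1/layout/analyze".format(endpoint),
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"source": document_url},
)
analyze.raise_for_status()
result_url = analyze.headers["Operation-Location"]

# 2. Poll the Get Analyze Layout Result API until the operation completes.
while True:
    result = requests.get(result_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
    if result["status"] in ("succeeded", "failed"):
        break
    time.sleep(1)

print(result["status"])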

Examine the results

You'll receive a 200 (success) response with JSON content.

See the following invoice image and its corresponding JSON output.

  • The "readResults" node contains every line of text with its respective bounding box placement on the page.
  • The "selectionMarks" node shows every selection mark (checkbox, radio mark) and whether its status is "selected" or "unselected".
  • The "pageResults" section includes the tables extracted. For each table, the text, row, and column index, row and column spanning, bounding box, and more are extracted.

Contoso project statement document with a table.

Response body

This output has been shortened for simplicity. See the full sample output on GitHub.

{
    "status": "succeeded",
    "createdDateTime": "2020-08-20T20:40:50Z",
    "lastUpdatedDateTime": "2020-08-20T20:40:55Z",
    "analyzeResult": {
        "version": "2.1.0",
        "readResults": [
            {
                "page": 1,
                "angle": 0,
                "width": 8.5,
                "height": 11,
                "unit": "inch",
                "lines": [
                    {
                        "boundingBox": [
                            0.5826,
                            0.4411,
                            2.3387,
                            0.4411,
                            2.3387,
                            0.7969,
                            0.5826,
                            0.7969
                        ],
                        "text": "Contoso, Ltd.",
                        "words": [
                            {
                                "boundingBox": [
                                    0.5826,
                                    0.4411,
                                    1.744,
                                    0.4411,
                                    1.744,
                                    0.7969,
                                    0.5826,
                                    0.7969
                                ],
                                "text": "Contoso,",
                                "confidence": 1
                            },
                            {
                                "boundingBox": [
                                    1.8448,
                                    0.4446,
                                    2.3387,
                                    0.4446,
                                    2.3387,
                                    0.7631,
                                    1.8448,
                                    0.7631
                                ],
                                "text": "Ltd.",
                                "confidence": 1
                            }
                        ]
                    },
                    ...
                        ]
                    }
                ],
                "selectionMarks": [
                    {
                        "boundingBox": [
                            3.9737,
                            3.7475,
                            4.1693,
                            3.7475,
                            4.1693,
                            3.9428,
                            3.9737,
                            3.9428
                        ],
                        "confidence": 0.989,
                        "state": "selected"
                    },
                    ...
                ]
            }
        ],
        "pageResults": [
            {
                "page": 1,
                "tables": [
                    {
                        "rows": 5,
                        "columns": 5,
                        "cells": [
                            {
                                "rowIndex": 0,
                                "columnIndex": 0,
                                "text": "Training Date",
                                "boundingBox": [
                                    0.5133,
                                    4.2167,
                                    1.7567,
                                    4.2167,
                                    1.7567,
                                    4.4492,
                                    0.5133,
                                    4.4492
                                ],
                                "elements": [
                                    "#/readResults/0/lines/12/words/0",
                                    "#/readResults/0/lines/12/words/1"
                                ]
                            },
                            ...
                        ]
                    },
                    ...
                ]
            }
        ]
    }
}
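
If you save a response like the one above to a file, you can walk the "pageResults" node to inspect the extracted tables. A minimal sketch using jq (a third-party command-line JSON processor, assumed to be installed), with the response saved as layout.json:

# Print each table cell as "row,column: text" from the layout output.
jq -r '.analyzeResult.pageResults[].tables[].cells[] | "\(.rowIndex),\(.columnIndex): \(.text)"' layout.json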

Analyze receipts

This section demonstrates how to analyze and extract common fields from US receipts, using a pre-trained receipt model. For more information about receipt analysis, see the Receipts conceptual guide. To start analyzing a receipt, call the Analyze Receipt API using the cURL command below. Before you run the command, make these changes:

  1. Replace {endpoint} with the endpoint that you obtained with your Form Recognizer subscription.
  2. Replace {your receipt URL} with the URL address of a receipt image.
  3. Replace {subscription key} with the subscription key you copied from the previous step.

Request

curl -i -X POST "https://{endpoint}/formrecognizer/v2.1/prebuilt/receipt/analyze" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{ 'source': '{your receipt URL}'}"

Operation-Location

You'll receive a 202 (Success) response that includes an Operation-Location header. The value of this header contains a result ID that you can use to query the status of the asynchronous operation and get the results:

https://cognitiveservice/formrecognizer/v2.1/prebuilt/receipt/analyzeResults/{resultId}

In the following example, as part of the URL, the string after analyzeResults/ is the result ID:

https://cognitiveservice/formrecognizer/v2.1/prebuilt/receipt/analyzeResults/54f0b076-4e38-43e5-81bd-b85b8835fdfb

Get receipt results

After you've called the Analyze Receipt API, you call the Get Analyze Receipt Result API to get the status of the operation and the extracted data. Before you run the command, make these changes:

  1. Replace {endpoint} with the endpoint that you obtained with your Form Recognizer subscription. You can find it on your Form Recognizer resource Overview tab.
  2. Replace {resultId} with the result ID from the previous step.
  3. Replace {subscription key} with your subscription key.

Request

curl -X GET "https://{endpoint}/formrecognizer/v2.1/prebuilt/receipt/analyzeResults/{resultId}" -H "Ocp-Apim-Subscription-Key: {subscription key}"

Examine the response

You'll receive a 200 (Success) response with JSON output. The first field, "status", indicates the status of the operation. If the operation is not complete, the value of "status" will be "running" or "notStarted", and you should call the API again, either manually or through a script. We recommend an interval of one second or more between calls.
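
One way to poll is a small shell loop that sleeps between calls and stops when the operation reaches a terminal state. This is only a sketch; FR_ENDPOINT, FR_KEY, and RESULT_ID are placeholder variables, and jq is assumed to be installed for reading the "status" field:

# Poll the Get Analyze Receipt Result API until the operation finishes.
while true; do
  STATUS=$(curl -s -X GET "https://${FR_ENDPOINT}/formrecognizer/v2.1/prebuilt/receipt/analyzeResults/${RESULT_ID}" \
    -H "Ocp-Apim-Subscription-Key: ${FR_KEY}" | jq -r '.status')
  echo "status: ${STATUS}"
  if [ "${STATUS}" = "succeeded" ] || [ "${STATUS}" = "failed" ]; then
    break
  fi
  sleep 1
done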

The "readResults" node contains all of the recognized text (if you set the optional includeTextDetails parameter to true). Text is organized by page, then by line, then by individual words. The "documentResults" node contains the receipt-specific values that the model discovered. This is where you'll find useful key/value pairs like the tax, total, merchant address, and so on.

See the following receipt image and its corresponding JSON output.

A receipt from Contoso store

Response body

This output has been shortened for readability. See the full sample output on GitHub.

{
  "status":"succeeded",
  "createdDateTime":"2019-12-17T04:11:24Z",
  "lastUpdatedDateTime":"2019-12-17T04:11:32Z",
  "analyzeResult":{
    "version":"2.1.0",
    "readResults":[
      {
        "page":1,
        "angle":0.6893,
        "width":1688,
        "height":3000,
        "unit":"pixel",
        "language":"en",
        "lines":[
          {
            "text":"Contoso",
            "boundingBox":[
              635,
              510,
              1086,
              461,
              1098,
              558,
              643,
              604
            ],
            "words":[
              {
                "text":"Contoso",
                "boundingBox":[
                  639,
                  510,
                  1087,
                  461,
                  1098,
                  551,
                  646,
                  604
                ],
                "confidence":0.955
              }
            ]
          },
          ...
        ]
      }
    ],
    "documentResults":[
      {
        "docType":"prebuilt:receipt",
        "pageRange":[
          1,
          1
        ],
        "fields":{
          "ReceiptType":{
            "type":"string",
            "valueString":"Itemized",
            "confidence":0.692
          },
          "MerchantName":{
            "type":"string",
            "valueString":"Contoso Contoso",
            "text":"Contoso Contoso",
            "boundingBox":[
              378.2,
              292.4,
              1117.7,
              468.3,
              1035.7,
              812.7,
              296.3,
              636.8
            ],
            "page":1,
            "confidence":0.613,
            "elements":[
              "#/readResults/0/lines/0/words/0",
              "#/readResults/0/lines/1/words/0"
            ]
          },
          "MerchantAddress":{
            "type":"string",
            "valueString":"123 Main Street Redmond, WA 98052",
            "text":"123 Main Street Redmond, WA 98052",
            "boundingBox":[
              302,
              675.8,
              848.1,
              793.7,
              809.9,
              970.4,
              263.9,
              852.5
            ],
            "page":1,
            "confidence":0.99,
            "elements":[
              "#/readResults/0/lines/2/words/0",
              "#/readResults/0/lines/2/words/1",
              "#/readResults/0/lines/2/words/2",
              "#/readResults/0/lines/3/words/0",
              "#/readResults/0/lines/3/words/1",
              "#/readResults/0/lines/3/words/2"
            ]
          },
          "MerchantPhoneNumber":{
            "type":"phoneNumber",
            "valuePhoneNumber":"+19876543210",
            "text":"987-654-3210",
            "boundingBox":[
              278,
              1004,
              656.3,
              1054.7,
              646.8,
              1125.3,
              268.5,
              1074.7
            ],
            "page":1,
            "confidence":0.99,
            "elements":[
              "#/readResults/0/lines/4/words/0"
            ]
          },
          "TransactionDate":{
            "type":"date",
            "valueDate":"2019-06-10",
            "text":"6/10/2019",
            "boundingBox":[
              265.1,
              1228.4,
              525,
              1247,
              518.9,
              1332.1,
              259,
              1313.5
            ],
            "page":1,
            "confidence":0.99,
            "elements":[
              "#/readResults/0/lines/5/words/0"
            ]
          },
          "TransactionTime":{
            "type":"time",
            "valueTime":"13:59:00",
            "text":"13:59",
            "boundingBox":[
              541,
              1248,
              677.3,
              1261.5,
              668.9,
              1346.5,
              532.6,
              1333
            ],
            "page":1,
            "confidence":0.977,
            "elements":[
              "#/readResults/0/lines/5/words/1"
            ]
          },
          "Items":{
            "type":"array",
            "valueArray":[
              {
                "type":"object",
                "valueObject":{
                  "Quantity":{
                    "type":"number",
                    "text":"1",
                    "boundingBox":[
                      245.1,
                      1581.5,
                      300.9,
                      1585.1,
                      295,
                      1676,
                      239.2,
                      1672.4
                    ],
                    "page":1,
                    "confidence":0.92,
                    "elements":[
                      "#/readResults/0/lines/7/words/0"
                    ]
                  },
                  "Name":{
                    "type":"string",
                    "valueString":"Cappuccino",
                    "text":"Cappuccino",
                    "boundingBox":[
                      322,
                      1586,
                      654.2,
                      1601.1,
                      650,
                      1693,
                      317.8,
                      1678
                    ],
                    "page":1,
                    "confidence":0.923,
                    "elements":[
                      "#/readResults/0/lines/7/words/1"
                    ]
                  },
                  "TotalPrice":{
                    "type":"number",
                    "valueNumber":2.2,
                    "text":"$2.20",
                    "boundingBox":[
                      1107.7,
                      1584,
                      1263,
                      1574,
                      1268.3,
                      1656,
                      1113,
                      1666
                    ],
                    "page":1,
                    "confidence":0.918,
                    "elements":[
                      "#/readResults/0/lines/8/words/0"
                    ]
                  }
                }
              },
              ...
            ]
          },
          "Subtotal":{
            "type":"number",
            "valueNumber":11.7,
            "text":"11.70",
            "boundingBox":[
              1146,
              2221,
              1297.3,
              2223,
              1296,
              2319,
              1144.7,
              2317
            ],
            "page":1,
            "confidence":0.955,
            "elements":[
              "#/readResults/0/lines/13/words/1"
            ]
          },
          "Tax":{
            "type":"number",
            "valueNumber":1.17,
            "text":"1.17",
            "boundingBox":[
              1190,
              2359,
              1304,
              2359,
              1304,
              2456,
              1190,
              2456
            ],
            "page":1,
            "confidence":0.979,
            "elements":[
              "#/readResults/0/lines/15/words/1"
            ]
          },
          "Tip":{
            "type":"number",
            "valueNumber":1.63,
            "text":"1.63",
            "boundingBox":[
              1094,
              2479,
              1267.7,
              2485,
              1264,
              2591,
              1090.3,
              2585
            ],
            "page":1,
            "confidence":0.941,
            "elements":[
              "#/readResults/0/lines/17/words/1"
            ]
          },
          "Total":{
            "type":"number",
            "valueNumber":14.5,
            "text":"$14.50",
            "boundingBox":[
              1034.2,
              2617,
              1387.5,
              2638.2,
              1380,
              2763,
              1026.7,
              2741.8
            ],
            "page":1,
            "confidence":0.985,
            "elements":[
              "#/readResults/0/lines/19/words/0"
            ]
          }
        }
      }
    ]
  }
}
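
To pull a few of the receipt fields out of a saved response like the one above, you can read the "documentResults" node directly. A minimal jq sketch, assuming the response was saved as receipt.json:

# Print the merchant name, transaction date, and total from the first receipt result.
jq -r '.analyzeResult.documentResults[0].fields
  | "Merchant: \(.MerchantName.valueString)",
    "Date: \(.TransactionDate.valueDate)",
    "Total: \(.Total.valueNumber)"' receipt.json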

Analyze business cards

This section demonstrates how to analyze and extract common fields from English business cards, using a pre-trained model. For more information about business card analysis, see the Business cards conceptual guide. To start analyzing a business card, you call the Analyze Business Card API using the cURL command below. Before you run the command, make these changes:

  1. Replace {endpoint} with the endpoint that you obtained with your Form Recognizer subscription.
  2. Replace {your business card URL} with the URL address of a business card image.
  3. Replace {subscription key} with the subscription key you copied from the previous step.

Request

curl -i -X POST "https://{endpoint}/formrecognizer/v2.1/prebuilt/businessCard/analyze" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{ 'source': '{your receipt URL}'}"

Operation-Location

You'll receive a 202 (Success) response that includes an Operation-Location header. The value of this header contains a result ID that you can use to query the status of the asynchronous operation and get the results:

https://cognitiveservice/formrecognizer/v2.1/prebuilt/businessCard/analyzeResults/{resultId}

In the following example, as part of the URL, the string after analyzeResults/ is the result ID.

https://cognitiveservice/formrecognizer/v2.1/prebuilt/businessCard/analyzeResults/54f0b076-4e38-43e5-81bd-b85b8835fdfb

Get business card results

After you've called the Analyze Business Card API, you call the Get Analyze Business Card Result API to get the status of the operation and the extracted data. Before you run the command, make these changes:

  1. Replace {endpoint} with the endpoint that you obtained with your Form Recognizer subscription. You can find it on your Form Recognizer resource Overview tab.
  2. Replace {resultId} with the result ID from the previous step.
  3. Replace {subscription key} with your subscription key.

Request

curl -v -X GET "https://{endpoint}/formrecognizer/v2.1/prebuilt/businessCard/analyzeResults/{resultId}" -H "Ocp-Apim-Subscription-Key: {subscription key}"

Examine the response

You'll receive a 200 (Success) response with JSON output.

The "readResults" node contains all of the recognized text. Text is organized by page, then by line, then by individual words. The "documentResults" node contains the business-card-specific values that the model discovered. This is where you'll find useful contact information like the company name, first name, last name, phone number, and so on.

A business card from Contoso company

This sample JSON output has been shortened for readability. See the full sample output on GitHub.

{
    "status": "succeeded",
    "createdDateTime":"2021-02-09T18:14:05Z",
    "lastUpdatedDateTime":"2021-02-09T18:14:10Z",
    "analyzeResult": {
        "version": "2.1.0",
        "readResults": [
            {
             "page":1,
             "angle":-16.6836,
             "width":4032,
             "height":3024,
             "unit":"pixel"
          }
        ],
        "documentResults": [
            {
                "docType": "prebuilt:businesscard",
                "pageRange": [
                    1,
                    1
                ],
                "fields": {
                    "ContactNames": {
                        "type": "array",
                        "valueArray": [
                            {
                                "type": "object",
                                "valueObject": {
                                    "FirstName": {
                                        "type": "string",
                                        "valueString": "Avery",
                                        "text": "Avery",
                                        "boundingBox": [
                                            703,
                                            1096,
                                            1134,
                                            989,
                                            1165,
                                            1109,
                                            733,
                                            1206
                                        ],
                                        "page": 1
                                    }
                                },
                                "text": "Dr. Avery Smith",
                                "boundingBox": [
                                    419.3,
                                    1154.6,
                                    1589.6,
                                    877.9,
                                    1618.9,
                                    1001.7,
                                    448.6,
                                    1278.4
                                ],
                                "confidence": 0.993
                            }
                        ]
                    },
                    "Emails": {
                        "type": "array",
                        "valueArray": [
                            {
                                "type": "string",
                                "valueString": "avery.smith@contoso.com",
                                "text": "avery.smith@contoso.com",
                                "boundingBox": [
                                    2107,
                                    934,
                                    2917,
                                    696,
                                    2935,
                                    764,
                                    2126,
                                    995
                                ],
                                "page": 1,
                                "confidence": 0.99
                            }
                        ]
                    },
                    "Websites": {
                        "type": "array",
                        "valueArray": [
                            {
                                "type": "string",
                                "valueString": "https://www.contoso.com/",
                                "text": "https://www.contoso.com/",
                                "boundingBox": [
                                    2121,
                                    1002,
                                    2992,
                                    755,
                                    3014,
                                    826,
                                    2143,
                                    1077
                                ],
                                "page": 1,
                                "confidence": 0.995
                            }
                        ]
                    }
                }
            }
        ]
    }
}

Analyze invoices

You can use Form Recognizer to extract field text and semantic values from a given invoice document. For more information about invoice analysis, see the Invoice conceptual guide. To start analyzing an invoice, call the Analyze Invoice API using the cURL command below. Before you run the command, make these changes:

  1. Replace {endpoint} with the endpoint that you obtained with your Form Recognizer subscription.
  2. Replace {your invoice URL} with the URL address of an invoice document.
  3. Replace {subscription key} with your subscription key.

Request

curl -v -i -X POST "https://{endpoint}/formrecognizer/v2.1/prebuilt/invoice/analyze" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{'source': '{your invoice URL}'}"

Operation-Location

You'll receive a 202 (Success) response that includes an Operation-Location header. The value of this header contains a result ID that you can use to query the status of the asynchronous operation and get the results:

https://cognitiveservice/formrecognizer/v2.1/prebuilt/invoice/analyzeResults/{resultId}

In the following example, as part of the URL, the string after analyzeResults/ is the result ID:

https://cognitiveservice/formrecognizer/v2.1/prebuilt/invoice/analyzeResults/54f0b076-4e38-43e5-81bd-b85b8835fdfb

Get invoice results

After you've called the Analyze Invoice API, you call the Get Analyze Invoice Result API to get the status of the operation and the extracted data. Before you run the command, make these changes:

  1. Replace {endpoint} with the endpoint that you obtained with your Form Recognizer subscription. You can find it on your Form Recognizer resource Overview tab.
  2. Replace {resultId} with the result ID from the previous step.
  3. Replace {subscription key} with your subscription key.

Request

curl -v -X GET "https://{endpoint}/formrecognizer/v2.1/prebuilt/invoice/analyzeResults/{resultId}" -H "Ocp-Apim-Subscription-Key: {subscription key}"

Examine the response

You'll receive a 200 (Success) response with JSON output.

  • The "readResults" field contains every line of text that was extracted from the invoice.
  • The "pageResults" includes the tables and selections marks extracted from the invoice.
  • The "documentResults" field contains key/value information for the most relevant parts of the invoice.

See the following invoice document and its corresponding JSON output.

Response body

This JSON content has been shortened for readability. See the full sample output on GitHub.

{
    "status": "succeeded",
    "createdDateTime": "2020-11-06T23:32:11Z",
    "lastUpdatedDateTime": "2020-11-06T23:32:20Z",
    "analyzeResult": {
        "version": "2.1.0",
        "readResults": [{
            "page": 1,
            "angle": 0,
            "width": 8.5,
            "height": 11,
            "unit": "inch"
        }],
        "pageResults": [{
            "page": 1,
            "tables": [{
                "rows": 3,
                "columns": 4,
                "cells": [{
                    "rowIndex": 0,
                    "columnIndex": 0,
                    "text": "QUANTITY",
                    "boundingBox": [0.4953,
                    5.7306,
                    1.8097,
                    5.7306,
                    1.7942,
                    6.0122,
                    0.4953,
                    6.0122]
                },
                {
                    "rowIndex": 0,
                    "columnIndex": 1,
                    "text": "DESCRIPTION",
                    "boundingBox": [1.8097,
                    5.7306,
                    5.7529,
                    5.7306,
                    5.7452,
                    6.0122,
                    1.7942,
                    6.0122]
                },
                ...
                ],
                "boundingBox": [0.4794,
                5.7132,
                8.0158,
                5.714,
                8.0118,
                6.5627,
                0.4757,
                6.5619]
            },
            {
                "rows": 2,
                "columns": 6,
                "cells": [{
                    "rowIndex": 0,
                    "columnIndex": 0,
                    "text": "SALESPERSON",
                    "boundingBox": [0.4979,
                    4.963,
                    1.8051,
                    4.963,
                    1.7975,
                    5.2398,
                    0.5056,
                    5.2398]
                },
                {
                    "rowIndex": 0,
                    "columnIndex": 1,
                    "text": "P.O. NUMBER",
                    "boundingBox": [1.8051,
                    4.963,
                    3.3047,
                    4.963,
                    3.3124,
                    5.2398,
                    1.7975,
                    5.2398]
                },
                ...
                ],
                "boundingBox": [0.4976,
                4.961,
                7.9959,
                4.9606,
                7.9959,
                5.5204,
                0.4972,
                5.5209]
            }]
        }],
        "documentResults": [{
            "docType": "prebuilt:invoice",
            "pageRange": [1,
            1],
            "fields": {
                "AmountDue": {
                    "type": "number",
                    "valueNumber": 610,
                    "text": "$610.00",
                    "boundingBox": [7.3809,
                    7.8153,
                    7.9167,
                    7.8153,
                    7.9167,
                    7.9591,
                    7.3809,
                    7.9591],
                    "page": 1,
                    "confidence": 0.875
                },
                "BillingAddress": {
                    "type": "string",
                    "valueString": "123 Bill St, Redmond WA, 98052",
                    "text": "123 Bill St, Redmond WA, 98052",
                    "boundingBox": [0.594,
                    4.3724,
                    2.0125,
                    4.3724,
                    2.0125,
                    4.7125,
                    0.594,
                    4.7125],
                    "page": 1,
                    "confidence": 0.997
                },
                "BillingAddressRecipient": {
                    "type": "string",
                    "valueString": "Microsoft Finance",
                    "text": "Microsoft Finance",
                    "boundingBox": [0.594,
                    4.1684,
                    1.7907,
                    4.1684,
                    1.7907,
                    4.2837,
                    0.594,
                    4.2837],
                    "page": 1,
                    "confidence": 0.998
                },
                ...
            }
        }]
    }
}

Analyze identity (ID) documents

To start analyzing an identity document, call the Analyze ID Document API using the cURL command below. For more information about identity document analysis, see the Identity documents conceptual guide. Before you run the command, make these changes:

  1. Replace {endpoint} with the endpoint that you obtained with your Form Recognizer subscription.
  2. Replace {your ID document URL} with the URL address of an identity document image.
  3. Replace {subscription key} with the subscription key you copied from the previous step.

Request

curl -i -X POST "https://{endpoint}/formrecognizer/v2.1/prebuilt/idDocument/analyze" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{ 'source': '{your identity document URL}'}"

Operation-Location

You'll receive a 202 (Success) response that includes an Operation-Location header. The value of this header contains a result ID that you can use to query the status of the asynchronous operation and get the results:

https://cognitiveservice/formrecognizer/v2.1/prebuilt/idDocument/analyzeResults/{resultId}

In the following example, the string after analyzeResults/ is the result ID:

https://westus.api.cognitive.microsoft.com/formrecognizer/v2.1/prebuilt/idDocument/analyzeResults/83d0137b-28e1-4051-98ce-42bd21f77ae0

Get the Analyze ID Document result

After you've called the Analyze ID Document API, call the Get Analyze ID Document Result API to get the status of the operation and the extracted data. Before you run the command, make these changes:

  1. Replace {endpoint} with the endpoint that you obtained with your Form Recognizer subscription. You can find it on your Form Recognizer resource Overview tab.
  2. Replace {resultId} with the result ID from the previous step.
  3. Replace {subscription key} with your subscription key.

Request

curl -X GET "https://{endpoint}/formrecognizer/v2.1/prebuilt/businessCard/analyzeResults/{resultId}" -H "Ocp-Apim-Subscription-Key: {subscription key}"

Examine the response

You'll receive a 200 (Success) response with JSON output. The first field, "status", indicates the status of the operation. If the operation is not complete, the value of "status" will be "running" or "notStarted", and you should call the API again, either manually or through a script, until the value is "succeeded". We recommend an interval of one second or more between calls.

  • The "readResults" field contains every line of text that was extracted from the identity document.
  • The "documentResults" field contains an array of objects, each representing an ID document detected in the input document.

Below is a sample identity document and its corresponding JSON output.

  • sample driver's license

Response body

{
    "status": "succeeded",
    "createdDateTime": "2021-04-13T17:24:52Z",
    "lastUpdatedDateTime": "2021-04-13T17:24:55Z",
    "analyzeResult": {
      "version": "2.1.0",
      "readResults": [
        {
          "page": 1,
          "angle": -0.2823,
          "width": 450,
          "height": 294,
          "unit": "pixel"
        }
      ],
      "documentResults": [
        {
          "docType": "prebuilt:idDocument:driverLicense",
          "docTypeConfidence": 0.995,
          "pageRange": [
            1,
            1
          ],
          "fields": {
            "Address": {
              "type": "string",
              "valueString": "123 STREET ADDRESS YOUR CITY WA 99999-1234",
              "text": "123 STREET ADDRESS YOUR CITY WA 99999-1234",
              "boundingBox": [
                158,
                151,
                326,
                151,
                326,
                177,
                158,
                177
              ],
              "page": 1,
              "confidence": 0.965
            },
            "CountryRegion": {
              "type": "countryRegion",
              "valueCountryRegion": "USA",
              "confidence": 0.99
            },
            "DateOfBirth": {
              "type": "date",
              "valueDate": "1958-01-06",
              "text": "01/06/1958",
              "boundingBox": [
                187,
                133,
                272,
                132,
                272,
                148,
                187,
                149
              ],
              "page": 1,
              "confidence": 0.99
            },
            "DateOfExpiration": {
              "type": "date",
              "valueDate": "2020-08-12",
              "text": "08/12/2020",
              "boundingBox": [
                332,
                230,
                414,
                228,
                414,
                244,
                332,
                245
              ],
              "page": 1,
              "confidence": 0.99
            },
            "DocumentNumber": {
              "type": "string",
              "valueString": "LICWDLACD5DG",
              "text": "LIC#WDLABCD456DG",
              "boundingBox": [
                162,
                70,
                307,
                68,
                307,
                84,
                163,
                85
              ],
              "page": 1,
              "confidence": 0.99
            },
            "FirstName": {
              "type": "string",
              "valueString": "LIAM R.",
              "text": "LIAM R.",
              "boundingBox": [
                158,
                102,
                216,
                102,
                216,
                116,
                158,
                116
              ],
              "page": 1,
              "confidence": 0.985
            },
            "LastName": {
              "type": "string",
              "valueString": "TALBOT",
              "text": "TALBOT",
              "boundingBox": [
                160,
                86,
                213,
                85,
                213,
                99,
                160,
                100
              ],
              "page": 1,
              "confidence": 0.987
            },
            "Region": {
              "type": "string",
              "valueString": "Washington",
              "confidence": 0.99
            },
            "Sex": {
              "type": "string",
              "valueString": "M",
              "text": "M",
              "boundingBox": [
                226,
                190,
                232,
                190,
                233,
                201,
                226,
                201
              ],
              "page": 1,
              "confidence": 0.99
            }
          }
        }
      ]
    }
  }

Train a custom model

To train a custom model, you'll need a set of training data in an Azure Storage blob. You need a minimum of five filled-in forms (PDF documents and/or images) of the same type/structure. See Build a training data set for a custom model for tips and options for putting together your training data.

Training without labeled data is the default operation and is simpler. Alternatively, you can manually label some or all of your training data beforehand. This is a more complex process but results in a better trained model.

Note

You can also train models with a graphical user interface such as the Form Recognizer sample labeling tool.

Train a model without labels

To train a Form Recognizer model with the documents in your Azure blob container, call the Train Custom Model API by running the following cURL command. Before you run the command, make these changes:

  1. Replace {endpoint} with the endpoint that you obtained with your Form Recognizer subscription.

  2. Replace {subscription key} with the subscription key you copied from the previous step.

  3. Replace {SAS URL} with the Azure Blob storage container's shared access signature (SAS) URL. To retrieve the SAS URL for your custom model training data, go to your storage resource in the Azure portal and select the Storage Explorer tab. Navigate to your container, right-click, and select Get shared access signature. It's important to get the SAS for your container, not for the storage account itself. Make sure the Read, Write, Delete, and List permissions are checked, and click Create. Then copy the value in the URL section to a temporary location. It should have the form: https://<storage account>.blob.core.windows.net/<container name>?<SAS value>. A command-line alternative is sketched after this list.

    SAS URL retrieval
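
If you prefer the command line to the portal, the Azure CLI can also generate a container-level SAS. The following is only a sketch; verify the parameters against the current az storage documentation, and note that STORAGE_ACCOUNT, STORAGE_KEY, and CONTAINER are placeholder variables you would set yourself (the expiry date is an example):

# Generate a SAS token for the training container with read, write, delete, and list permissions.
SAS_TOKEN=$(az storage container generate-sas \
  --account-name "${STORAGE_ACCOUNT}" \
  --account-key "${STORAGE_KEY}" \
  --name "${CONTAINER}" \
  --permissions rwdl \
  --expiry 2022-12-31T23:59Z \
  --https-only \
  --output tsv)

# Combine the token with the container URL to form the SAS URL used in the request below.
echo "https://${STORAGE_ACCOUNT}.blob.core.windows.net/${CONTAINER}?${SAS_TOKEN}"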

Request

curl -i -X POST "https://{endpoint}/formrecognizer/v2.1/custom/models" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{ 'source': '{SAS URL}'}"

Location

You'll receive a 201 (Success) response with a Location header. The value of this header contains a model ID for the newly trained model that you can use to query the status of the operation and get the results:

https://{endpoint}/formrecognizer/v2.1/custom/models/{modelId}

In the following example, as part of the URL, the string after models/ is the model ID.

https://westus.api.cognitive.microsoft.com/formrecognizer/v2.1/custom/models/77d8ecad-b8c1-427e-ac20-a3fe4af503e9

Train a model with labels

To train with labels, you need to have special label information files (<filename>.pdf.labels.json) in your blob storage container alongside the training documents. The Form Recognizer sample labeling tool provides a UI to help you create these label files. Once you have them, you can call the Train Custom Model API, with the "useLabelFile" parameter set to true in the JSON body.

Before you run the command, make these changes:

  1. Replace {endpoint} with the endpoint that you obtained with your Form Recognizer subscription.

  2. Replace {subscription key} with the subscription key you copied from the previous step.

  3. Replace {SAS URL} with the Azure Blob storage container's shared access signature (SAS) URL. To retrieve the SAS URL for your custom model training data, go to your storage resource in the Azure portal and select the Storage Explorer tab. Navigate to your container, right-click, and select Get shared access signature. It's important to get the SAS for your container, not for the storage account itself. Make sure the Read, Write, Delete, and List permissions are checked, and click Create. Then copy the value in the URL section to a temporary location. It should have the form: https://<storage account>.blob.core.windows.net/<container name>?<SAS value>.

    SAS URL retrieval

Request

curl -i -X POST "https://{endpoint}/formrecognizer/v2.1/custom/models" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{ 'source': '{SAS URL}', 'useLabelFile':true}"

Location

You'll receive a 201 (Success) response with a Location header. The value of this header contains a model ID for the newly trained model that you can use to query the status of the operation and get the results:

https://{endpoint}/formrecognizer/v2.1/custom/models/{modelId}

In the following example, as part of the URL, the string after models/ is the model ID.

https://westus.api.cognitive.microsoft.com/formrecognizer/v2.1/custom/models/4da0bf8e-5325-467c-93bb-9ff13d5f72a2

Get training results

After you've started the train operation, use the Get Custom Model API to check the training status. Pass the model ID into the API request:

  1. Replace {endpoint} with the endpoint that you obtained with your Form Recognizer subscription.
  2. Replace {subscription key} with your subscription key.
  3. Replace {model ID} with the model ID you received in the previous step.

Request

curl -X GET "https://{endpoint}/formrecognizer/v2.1/custom/models/{modelId}" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}"

Analyze forms with a custom model

Next, you'll use your newly trained model to analyze a document and extract key-value pairs and tables from it. Call the Analyze Form API by running the following cURL command. Before you run the command, make these changes:

  1. Replace {endpoint} with the endpoint that you obtained from your Form Recognizer subscription. You can find it on your Form Recognizer resource Overview tab.
  2. Replace {model ID} with the model ID that you received in the previous section.
  3. Replace {SAS URL} with a SAS URL to your file in Azure storage. Follow the steps in the Training section, but instead of getting a SAS URL for the whole blob container, get one for the specific file you want to analyze.
  4. Replace {subscription key} with your subscription key.

Request

curl -v "https://{endpoint}/formrecognizer/v2.1/custom/models/{modelId}/analyze?includeTextDetails=true" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" -d "{ 'source': '{SAS URL}' } "

You'll receive a 202 (Success) response with an Operation-Location header. The value of this header includes a result ID you'll use to track the results of the Analyze operation:

https://cognitiveservice/formrecognizer/v2.1/custom/models/{modelId}/analyzeResults/{resultId}

In the following example, as part of the URL, the string after analyzeResults/ is the result ID.

https://cognitiveservice/formrecognizer/v2.1/custom/models/{modelId}/analyzeResults/54f0b076-4e38-43e5-81bd-b85b8835fdfb

Save this result ID for the next step.

Get the Analyze results

Call the Get Analyze Form Result API to query the results of the Analyze operation.

  1. Replace {endpoint} with the endpoint that you obtained from your Form Recognizer subscription. You can find it on your Form Recognizer resource Overview tab.
  2. Replace {result ID} with the ID that you received in the previous section.
  3. Replace {subscription key} with your subscription key.

Request

curl -X GET "https://{endpoint}/formrecognizer/v2.1/custom/models/{modelId}/analyzeResults/{resultId}" -H "Ocp-Apim-Subscription-Key: {subscription key}"

You'll receive a 200 (Success) response with a JSON body in the following format. The output has been shortened for simplicity. Notice the "status" field at the top. This will have the value "succeeded" when the Analyze operation is complete. If the Analyze operation hasn't completed, you'll need to query the service again by rerunning the command. We recommend an interval of one second or more between calls.

In custom models trained without labels, the key/value pair associations and tables are in the "pageResults" node of the JSON output. In custom models trained with labels, the key/value pair associations are in the "documentResults" node. If you also specified plain text extraction through the includeTextDetails URL parameter, then the "readResults" node will show the content and positions of all the text in the document.

This sample JSON output has been shortened for simplicity. See the full sample output on GitHub.

Response body

{
  "status": "succeeded",
  "createdDateTime": "2020-08-21T01:13:28Z",
  "lastUpdatedDateTime": "2020-08-21T01:13:42Z",
  "analyzeResult": {
    "version": "2.1.0",
    "readResults": [
      {
        "page": 1,
        "angle": 0,
        "width": 8.5,
        "height": 11,
        "unit": "inch",
        "lines": [
          {
            "text": "Project Statement",
            "boundingBox": [
              5.0444,
              0.3613,
              8.0917,
              0.3613,
              8.0917,
              0.6718,
              5.0444,
              0.6718
            ],
            "words": [
              {
                "text": "Project",
                "boundingBox": [
                  5.0444,
                  0.3587,
                  6.2264,
                  0.3587,
                  6.2264,
                  0.708,
                  5.0444,
                  0.708
                ]
              },
              {
                "text": "Statement",
                "boundingBox": [
                  6.3361,
                  0.3635,
                  8.0917,
                  0.3635,
                  8.0917,
                  0.6396,
                  6.3361,
                  0.6396
                ]
              }
            ]
          },
          ...
        ]
      }
    ],
    "pageResults": [
      {
        "page": 1,
        "keyValuePairs": [
          {
            "key": {
              "text": "Date:",
              "boundingBox": [
                6.9833,
                1.0615,
                7.3333,
                1.0615,
                7.3333,
                1.1649,
                6.9833,
                1.1649
              ],
              "elements": [
                "#/readResults/0/lines/2/words/0"
              ]
            },
            "value": {
              "text": "9/10/2020",
              "boundingBox": [
                7.3833,
                1.0802,
                7.925,
                1.0802,
                7.925,
                1.174,
                7.3833,
                1.174
              ],
              "elements": [
                "#/readResults/0/lines/3/words/0"
              ]
            },
            "confidence": 1
          },
          ...
        ],
        "tables": [
          {
            "rows": 5,
            "columns": 5,
            "cells": [
              {
                "text": "Training Date",
                "rowIndex": 0,
                "columnIndex": 0,
                "boundingBox": [
                  0.6944,
                  4.2779,
                  1.5625,
                  4.2779,
                  1.5625,
                  4.4005,
                  0.6944,
                  4.4005
                ],
                "confidence": 1,
                "rowSpan": 1,
                "columnSpan": 1,
                "elements": [
                  "#/readResults/0/lines/15/words/0",
                  "#/readResults/0/lines/15/words/1"
                ],
                "isHeader": true,
                "isFooter": false
              },
              ...
            ]
          }
        ],
        "clusterId": 0
      }
    ],
    "documentResults": [],
    "errors": []
  }
}

Improve results

Examine the "confidence" values for each key/value result under the "pageResults" node. You should also look at the confidence scores in the "readResults" node, which correspond to the text read operation. The confidence of the read results does not affect the confidence of the key/value extraction results, so you should check both.

  • If the confidence scores for the read operation are low, try to improve the quality of your input documents (see Input requirements).
  • If the confidence scores for the key/value extraction operation are low, ensure that the documents being analyzed are of the same type as documents used in the training set. If the documents in the training set have variations in appearance, consider splitting them into different folders and training one model for each variation.

The confidence scores you target will depend on your use case, but generally it's a good practice to target a score of 80% or above. For more sensitive cases, like reading medical records or billing statements, a score of 100% is recommended.
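
As a quick way to spot weak extractions, you can filter the key/value pairs in a saved Analyze Form response by confidence. A minimal jq sketch, assuming the response was saved as analysis.json and using 0.8 as an example threshold:

# List key/value pairs from "pageResults" whose confidence is below 0.8.
jq -r '.analyzeResult.pageResults[].keyValuePairs[]
  | select(.confidence < 0.8)
  | "\(.confidence)  \(.key.text) -> \(.value.text)"' analysis.json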

Manage custom models

Get a list of custom models

Use the List Custom Models API in the following command to return a list of all the custom models that belong to your subscription.

  1. Replace {endpoint} with the endpoint that you obtained with your Form Recognizer subscription.
  2. Replace {subscription key} with the subscription key you copied from the previous step.

Request

curl -v -X GET "https://{endpoint}/formrecognizer/v2.1/custom/models?op=full"
-H "Ocp-Apim-Subscription-Key: {subscription key}"

Response body

You'll receive a 200 success response, with JSON data like the following. The "modelList" element contains all of your created models and their information.

{
  "summary": {
    "count": 0,
    "limit": 0,
    "lastUpdatedDateTime": "string"
  },
  "modelList": [
    {
      "modelId": "string",
      "status": "creating",
      "createdDateTime": "string",
      "lastUpdatedDateTime": "string"
    }
  ],
  "nextLink": "string"
}
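
To get a quick summary of your models from a response like the one above, you can print one line per entry in "modelList". A small jq sketch, assuming the response was saved as models.json:

# Print each custom model's ID, status, and last-updated time.
jq -r '.modelList[] | "\(.modelId)  \(.status)  \(.lastUpdatedDateTime)"' models.json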

Get a specific model

To retrieve detailed information about a specific custom model, use the Get Custom Model API in the following command.

  1. Replace {endpoint} with the endpoint that you obtained with your Form Recognizer subscription.
  2. Replace {subscription key} with the subscription key you copied from the previous step.
  3. Replace {modelId} with the ID of the custom model you want to look up.

Request

curl -v -X GET "https://{endpoint}/formrecognizer/v2.1/custom/models/{modelId}" -H "Ocp-Apim-Subscription-Key: {subscription key}"

Response body

You'll receive a 200 success response, with JSON data like the following.

{
  "modelInfo": {
    "modelId": "string",
    "status": "creating",
    "createdDateTime": "string",
    "lastUpdatedDateTime": "string"
  },
  "keys": {
    "clusters": {}
  },
  "trainResult": {
    "trainingDocuments": [
      {
        "documentName": "string",
        "pages": 0,
        "errors": [
          "string"
        ],
        "status": "succeeded"
      }
    ],
    "fields": [
      {
        "fieldName": "string",
        "accuracy": 0.0
      }
    ],
    "averageModelAccuracy": 0.0,
    "errors": [
      {
        "message": "string"
      }
    ]
  }
}

Delete a model from the resource account

You can also delete a model from your account by referencing its ID. This command calls the Delete Custom Model API to delete the model used in the previous section.

  1. Replace {endpoint} with the endpoint that you obtained with your Form Recognizer subscription.
  2. Replace {subscription key} with the subscription key you copied from the previous step.
  3. Replace {modelId} with the ID of the custom model you want to delete.

Request

curl -v -X DELETE "https://{endpoint}/formrecognizer/v2.1/custom/models/{modelId}" -H "Ocp-Apim-Subscription-Key: {subscription key}"

You'll receive a 204 success response, indicating that your model is marked for deletion. Model artifacts will be removed within 48 hours.

Next steps

In this quickstart, you used the Form Recognizer REST API to train models and analyze forms in different ways. Next, explore the reference documentation to learn about the Form Recognizer API in more depth.