OCR cognitive skill

The optical character recognition (OCR) skill recognizes printed and handwritten text in image files. It uses the machine learning models provided by Computer Vision in Cognitive Services and maps to the following functionality:

  • When textExtractionAlgorithm is set to "handwritten", the "RecognizeText" functionality is used.
  • When textExtractionAlgorithm is set to "printed", the "OCR" functionality is used for languages other than English. For English, the new "Recognize Text" functionality for printed text is used.
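
For example, opting into the handwriting recognizer is a matter of setting textExtractionAlgorithm on the skill. A minimal sketch (the targetName value is illustrative; "handwritten" is currently English-only, so the language code is set accordingly):

{
  "@odata.type": "#Microsoft.Skills.Vision.OcrSkill",
  "context": "/document/normalized_images/*",
  "textExtractionAlgorithm": "handwritten",
  "defaultLanguageCode": "en",
  "inputs": [
    { "name": "image", "source": "/document/normalized_images/*" }
  ],
  "outputs": [
    { "name": "text", "targetName": "myHandwrittenText" }
  ]
}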

The OCR skill extracts text from image files. Supported file formats include:

  • .JPEG
  • .JPG
  • .PNG
  • .BMP
  • .GIF
  • .TIFF

Note

Starting December 21, 2018, you can attach a Cognitive Services resource to an Azure Search skillset. This allows us to start charging for skillset execution. On this date, we also began charging for image extraction as part of the document-cracking stage. Text extraction from documents continues to be offered at no additional cost.

Built-in cognitive skill execution is charged at the Cognitive Services pay-as-you go price, at the same rate as if you had performed the task directly. Image extraction is an Azure Search charge, currently offered at preview pricing. For details, see the Azure Search pricing page or How billing works.

Skill parameters

Parameters are case-sensitive.

detectOrientation
    Enables autodetection of image orientation. Valid values: true / false.

defaultLanguageCode
    Language code of the input text. If the language code is unspecified or null, the language is set to English. If the language is explicitly set to "unk", the language is auto-detected. Supported languages include:
      • zh-Hans (Chinese Simplified)
      • zh-Hant (Chinese Traditional)
      • cs (Czech)
      • da (Danish)
      • nl (Dutch)
      • en (English)
      • fi (Finnish)
      • fr (French)
      • de (German)
      • el (Greek)
      • hu (Hungarian)
      • it (Italian)
      • ja (Japanese)
      • ko (Korean)
      • nb (Norwegian)
      • pl (Polish)
      • pt (Portuguese)
      • ru (Russian)
      • es (Spanish)
      • sv (Swedish)
      • tr (Turkish)
      • ar (Arabic)
      • ro (Romanian)
      • sr-Cyrl (Serbian Cyrillic)
      • sr-Latn (Serbian Latin)
      • sk (Slovak)
      • unk (Unknown)

textExtractionAlgorithm
    "printed" or "handwritten". The "handwritten" text recognition algorithm is currently in preview and supported only in English.
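
Putting these parameters together: an excerpt of a skill that auto-detects the language, corrects rotated images, and uses the printed-text recognizer (the full skill shape appears in the sample definition below):

  "defaultLanguageCode": "unk",
  "detectOrientation": true,
  "textExtractionAlgorithm": "printed"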

Skill inputs

image
    Complex type. Currently works only with the "/document/normalized_images" field, produced by the Azure blob indexer when imageAction is set to a value other than "none". See the sample for more information.

Skill outputs

text
    Plain text extracted from the image.

layoutText
    Complex type that describes the extracted text and the location where the text was found.

Sample definition

{
  "skills": [
    {
      "description": "Extracts text (plain and structured) from image.",
      "@odata.type": "#Microsoft.Skills.Vision.OcrSkill",
      "context": "/document/normalized_images/*",
      "defaultLanguageCode": null,
      "detectOrientation": true,
      "inputs": [
        {
          "name": "image",
          "source": "/document/normalized_images/*"
        }
      ],
      "outputs": [
        {
          "name": "text",
          "targetName": "myText"
        },
        {
          "name": "layoutText",
          "targetName": "myLayoutText"
        }
      ]
    }
  ]
}
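
The targetName values determine where these outputs live in the enrichment tree. With the definition above, downstream skills can reference the extracted text and layout at the following paths:

  /document/normalized_images/*/myText
  /document/normalized_images/*/myLayoutText

(The merge sample later in this article reads /document/normalized_images/*/text because its OCR skill omits targetName, in which case the output name itself is used.)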

Sample text and layoutText output

{
  "text": "Hello World. -John",
  "layoutText":
  {
    "language" : "en",
    "text" : "Hello World. -John",
    "lines" : [
      {
        "boundingBox":
        [ {"x":10, "y":10}, {"x":50, "y":10}, {"x":50, "y":30},{"x":10, "y":30}],
        "text":"Hello World."
      },
      {
        "boundingBox": [ {"x":110, "y":10}, {"x":150, "y":10}, {"x":150, "y":30},{"x":110, "y":30}],
        "text":"-John"
      }
    ],
    "words": [
      {
        "boundingBox": [ {"x":110, "y":10}, {"x":150, "y":10}, {"x":150, "y":30},{"x":110, "y":30}],
        "text":"Hello"
      },
      {
        "boundingBox": [ {"x":110, "y":10}, {"x":150, "y":10}, {"x":150, "y":30},{"x":110, "y":30}],
        "text":"World."
      },
      {
        "boundingBox": [ {"x":110, "y":10}, {"x":150, "y":10}, {"x":150, "y":30},{"x":110, "y":30}],
        "text":"-John"
      }
    ]
  }
}

Sample: Merging text extracted from embedded images with the content of the document

A common use case for the Text Merge skill is to merge the textual representation of images (text from an OCR skill, or the caption of an image) into the content field of a document.

The following example skillset creates a merged_text field, which contains the textual content of your document plus the OCRed text from each of the images embedded in that document.

Request Body Syntax

{
  "description": "Extract text from images and merge with content text to produce merged_text",
  "skills":
  [
    {
      "description": "Extract text (plain and structured) from image.",
      "@odata.type": "#Microsoft.Skills.Vision.OcrSkill",
      "context": "/document/normalized_images/*",
      "defaultLanguageCode": "en",
      "detectOrientation": true,
      "inputs": [
        {
          "name": "image",
          "source": "/document/normalized_images/*"
        }
      ],
      "outputs": [
        {
          "name": "text"
        }
      ]
    },
    {
      "@odata.type": "#Microsoft.Skills.Text.MergeSkill",
      "description": "Create merged_text, which includes all the textual representation of each image inserted at the right location in the content field.",
      "context": "/document",
      "insertPreTag": " ",
      "insertPostTag": " ",
      "inputs": [
        {
          "name":"text", "source": "/document/content"
        },
        {
          "name": "itemsToInsert", "source": "/document/normalized_images/*/text"
        },
        {
          "name":"offsets", "source": "/document/normalized_images/*/contentOffset"
        }
      ],
      "outputs": [
        {
          "name": "mergedText", "targetName" : "merged_text"
        }
      ]
    }
  ]
}

The above skillset example assumes that the normalized_images field exists. To generate this field, set the imageAction configuration in your indexer definition to generateNormalizedImages, as shown below:

{
  //...rest of your indexer definition goes here ...
  "parameters": {
    "configuration": {
      "dataToExtract":"contentAndMetadata",
      "imageAction":"generateNormalizedImages"
    }
  }
}
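
To surface merged_text in your search index, also map it out of the enrichment tree with an output field mapping in the indexer definition. A sketch, assuming your index schema defines a searchable merged_text field:

{
  //...rest of your indexer definition goes here ...
  "outputFieldMappings": [
    {
      "sourceFieldName": "/document/merged_text",
      "targetFieldName": "merged_text"
    }
  ]
}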

See also