Build a cognitive search AI enrichment pipeline
This sample demonstrates AI enrichment by building a cognitive search indexing pipeline that detects and extracts text, and text representations of images and scanned documents, from blobs stored in Azure Blob storage. It applies cognitive skills backed by Cognitive Services, such as entity recognition and language detection, and uses the Azure Search REST APIs for index definition, data ingestion, AI enrichment, and query execution.
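As a sketch of the REST pattern the notebook follows, the snippet below builds the request for creating a minimal index. The service name, api-key, index name, fields, and api-version here are illustrative assumptions, not the exact values used in the sample:

```python
import json

# Assumed placeholders -- replace with your own service details.
SERVICE_NAME = "your-search-service"
API_KEY = "your-admin-api-key"
API_VERSION = "2020-06-30"  # assumed api-version; the notebook may pin a different one

ENDPOINT = f"https://{SERVICE_NAME}.search.windows.net"
HEADERS = {"Content-Type": "application/json", "api-key": API_KEY}

# A minimal index definition: one key field plus one searchable field.
index_schema = {
    "name": "demo-index",
    "fields": [
        {"name": "id", "type": "Edm.String", "key": True},
        {"name": "content", "type": "Edm.String", "searchable": True},
    ],
}

# Creating (or updating) the index is a PUT to /indexes/{name}:
#   import requests
#   r = requests.put(url, headers=HEADERS, data=json.dumps(index_schema))
#   # 201 on create, 204 on update
url = f"{ENDPOINT}/indexes/{index_schema['name']}?api-version={API_VERSION}"
print(url)
```

The same endpoint-plus-headers pattern, with `api-key` authentication, is reused by the notebook's other REST calls (data source, skillset, indexer, and queries).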
The sample is a Jupyter Python 3 notebook (.ipynb) used in Python Tutorial: Call Cognitive Services APIs in an Azure Search indexing pipeline.
Contents

| File name | Description |
|---|---|
| PythonTutorial-AzureSearch-AIEnrichment.ipynb | Jupyter Python notebook. |
| .gitignore | Define what to ignore at commit time. |
| CONTRIBUTING.md | Guidelines for contributing to the sample. |
| README.md | This README file. |
| LICENSE | The license for the sample. |
Prerequisites

- Anaconda 3.x, providing Python 3.x and Jupyter Notebook
- Sample file set (mixed content types)
- Azure storage account
- Azure Search service
Setup

- Clone or download this sample repository.
- Extract contents if the download is a zip file. Make sure the files are read-write.
Running the sample
- On the Windows Start menu, select Anaconda3, and then select Jupyter Notebook.
- Open the PythonTutorial-AzureSearch-AIEnrichment.ipynb file in Jupyter Notebook.
- In the notebook, replace the search service name and api-key placeholders with the details of your Azure Search service.
- Replace the connection string placeholder with a connection string for the Azure Blob storage resource that you created, and to which you uploaded content files of various file types.
- Run each step individually.
By executing each step sequentially, you can verify the response status or output of each request before continuing to the next step. The step that creates the indexer, in particular, can take a few minutes to complete. See the tutorial for more details.
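Because the indexer step can run for several minutes, it helps to poll its status between cells rather than check once. The sketch below uses the Get Indexer Status REST operation; the service name, api-key, indexer name, and api-version are assumptions, not the sample's actual values:

```python
import time

# Assumed placeholders -- replace with your own service details.
SERVICE_NAME = "your-search-service"
API_KEY = "your-admin-api-key"
API_VERSION = "2020-06-30"  # assumed api-version

def indexer_status_url(indexer_name: str) -> str:
    """Build the URL for the Get Indexer Status REST operation."""
    return (f"https://{SERVICE_NAME}.search.windows.net"
            f"/indexers/{indexer_name}/status?api-version={API_VERSION}")

def wait_for_indexer(indexer_name: str, timeout_s: int = 300) -> str:
    """Poll lastResult.status until the indexer run finishes or we time out."""
    import requests  # third-party; pip install requests
    url = indexer_status_url(indexer_name)
    headers = {"api-key": API_KEY}
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        body = requests.get(url, headers=headers).json()
        status = (body.get("lastResult") or {}).get("status", "inProgress")
        if status != "inProgress":
            return status  # e.g. "success" or "transientFailure"
        time.sleep(10)  # wait before polling again
    return "timeout"
```

A call such as `wait_for_indexer("demo-indexer")` (a hypothetical indexer name) blocks until the run reports a terminal status, which makes a good checkpoint before moving on to the query steps.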