Quickstart: Face client library for Python

Get started with the Face client library for Python. Follow these steps to install the package and try out the example code for basic tasks. The Face API service provides you with access to advanced algorithms for detecting and recognizing human faces in images.

Use the Face client library for Python to:

  • Detect faces in an image
  • Find similar faces
  • Create and train a person group
  • Identify a face
  • Verify faces
  • Take a snapshot for data migration

Reference documentation | Library source code | Package (PyPI) | Samples


Setting up

Create a Face Azure resource

Azure Cognitive Services are represented by Azure resources that you subscribe to. Create a resource for Face using the Azure portal or the Azure CLI on your local machine. You can also use a free trial subscription.

After you get a key from your trial subscription or resource, create an environment variable for the key, named FACE_SUBSCRIPTION_KEY.
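How you set the variable depends on your operating system and shell; this sketch shows bash/zsh, with a placeholder standing in for your actual key:

```shell
# Make the key available to programs started from this shell (bash/zsh).
# On Windows, run `setx FACE_SUBSCRIPTION_KEY "<your-key>"` and open a new console.
export FACE_SUBSCRIPTION_KEY="paste-your-key-here"
```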

Create a new Python application

Create a new Python script—quickstart-file.py, for example. Then open it in your preferred editor or IDE and import the following libraries.

import asyncio
import io
import glob
import os
import sys
import time
import uuid
import requests
from urllib.parse import urlparse
from io import BytesIO
from PIL import Image, ImageDraw
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials
from azure.cognitiveservices.vision.face.models import TrainingStatusType, Person, SnapshotObjectType, OperationStatusType

Then, create variables for your resource's Azure endpoint and key. You may need to change the first part of the endpoint (westus) to match your subscription.

# Set the FACE_SUBSCRIPTION_KEY environment variable with your key as the value.
# This key will serve all examples in this document.
KEY = os.environ['FACE_SUBSCRIPTION_KEY']

# Set the FACE_ENDPOINT environment variable with the endpoint from your Face service in Azure.
# This endpoint will be used in all examples in this quickstart.
ENDPOINT = os.environ['FACE_ENDPOINT']


If you created the environment variable after you launched the application, you will need to close and reopen the editor, IDE, or shell running it to access the variable.
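If you'd rather fail fast with a clear message when a variable is missing, a small helper works; this is a sketch, not part of the SDK:

```python
import os

def require_env(name):
    """Return the value of a required environment variable, or stop with a clear error."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            'Set the {} environment variable before running this quickstart.'.format(name))
    return value

# Example (assumes the variable was set as described above):
# KEY = require_env('FACE_SUBSCRIPTION_KEY')
```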

Install the client library

You can install the client library with:

pip install --upgrade azure-cognitiveservices-vision-face

Object model

The following classes and interfaces handle some of the major features of the Face Python SDK.

  • FaceClient: This class represents your authorization to use the Face service, and you need it for all Face functionality. You instantiate it with your subscription information, and you use it to produce instances of other classes.
  • FaceOperations: This class handles the basic detection and recognition tasks that you can do with human faces.
  • DetectedFace: This class represents all of the data that was detected from a single face in an image. You can use it to retrieve detailed information about the face.
  • FaceListOperations: This class manages the cloud-stored FaceList constructs, which store an assorted set of faces.
  • PersonGroupPersonOperations: This class manages the cloud-stored Person constructs, which store a set of faces that belong to a single person.
  • PersonGroupOperations: This class manages the cloud-stored PersonGroup constructs, which store a set of assorted Person objects.
  • SnapshotOperations: This class manages the Snapshot functionality; you can use it to temporarily save all of your cloud-based face data and migrate that data to a new Azure subscription.

Code examples

These code snippets show you how to do the following tasks with the Face client library for Python:

Authenticate the client


This quickstart assumes you've created an environment variable for your Face key, named FACE_SUBSCRIPTION_KEY.

Instantiate a client with your endpoint and key. Create a CognitiveServicesCredentials object with your key, and use it with your endpoint to create a FaceClient object.

# Create an authenticated FaceClient.
face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))

Detect faces in an image

The following code detects a face in a remote image. It prints the detected face's ID to the console and also stores it in program memory. Then, it detects the faces in an image with multiple people and prints their IDs to the console as well. By changing the parameters in the detect_with_url method, you can return different information with each DetectedFace object.

# Detect a face in an image that contains a single face
single_face_image_url = 'https://www.biography.com/.image/t_share/MTQ1MzAyNzYzOTgxNTE0NTEz/john-f-kennedy---mini-biography.jpg'
single_image_name = os.path.basename(single_face_image_url)
detected_faces = face_client.face.detect_with_url(url=single_face_image_url)
if not detected_faces:
    raise Exception('No face detected from image {}'.format(single_image_name))

# Display the detected face ID in the first single-face image.
# Face IDs are used for comparison to faces (their IDs) detected in other images.
print('Detected face ID from', single_image_name, ':')
for face in detected_faces:
    print(face.face_id)

# Save this ID for use in Find Similar
first_image_face_ID = detected_faces[0].face_id

See the sample code on GitHub for more detection scenarios.

Display and frame faces

The following code outputs the given image to the display and draws rectangles around the faces, using the DetectedFace.faceRectangle property.

# Detect a face in an image that contains a single face
single_face_image_url = 'https://raw.githubusercontent.com/Microsoft/Cognitive-Face-Windows/master/Data/detection1.jpg'
single_image_name = os.path.basename(single_face_image_url)
detected_faces = face_client.face.detect_with_url(url=single_face_image_url)
if not detected_faces:
    raise Exception('No face detected from image {}'.format(single_image_name))

# Convert width and height to the two corner points of a rectangle
def getRectangle(faceDictionary):
    rect = faceDictionary.face_rectangle
    left = rect.left
    top = rect.top
    right = left + rect.width
    bottom = top + rect.height
    return ((left, top), (right, bottom))

# Download the image from the url
response = requests.get(single_face_image_url)
img = Image.open(BytesIO(response.content))

# For each face returned use the face rectangle and draw a red box.
print('Drawing rectangle around face... see popup for results.')
draw = ImageDraw.Draw(img)
for face in detected_faces:
    draw.rectangle(getRectangle(face), outline='red')

# Display the image in the user's default image browser.
img.show()

(Image: a young woman with a red rectangle drawn around her face.)
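You can sanity-check the corner conversion without calling the service. This sketch uses the same arithmetic against a stand-in rectangle object (SimpleNamespace plays the role of DetectedFace.face_rectangle):

```python
from types import SimpleNamespace

def rect_to_corners(face_rectangle):
    # PIL's ImageDraw.rectangle expects ((left, top), (right, bottom)).
    left = face_rectangle.left
    top = face_rectangle.top
    right = left + face_rectangle.width
    bottom = top + face_rectangle.height
    return ((left, top), (right, bottom))

# Hypothetical rectangle: 100 px wide, 50 px tall, top-left corner at (10, 20).
mock_rect = SimpleNamespace(left=10, top=20, width=100, height=50)
print(rect_to_corners(mock_rect))  # ((10, 20), (110, 70))
```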

Find similar faces

The following code takes a single detected face and searches a set of other faces to find matches. When it finds a match, it prints the rectangle coordinates of the matched face to the console.

Find matches

First, run the code in the above section (Detect faces in an image) to save a reference to a single face. Then run the following code to get references to several faces in a group image.

# Detect the faces in an image that contains multiple faces
# Each detected face gets assigned a new ID
multi_face_image_url = "http://www.historyplace.com/kennedy/president-family-portrait-closeup.jpg"
multi_image_name = os.path.basename(multi_face_image_url)
detected_faces2 = face_client.face.detect_with_url(url=multi_face_image_url)

Then add the following code block to find instances of the first face in the group. See the find_similar method to learn how to modify this behavior.

# Search through faces detected in group image for the single face from first image.
# First, create a list of the face IDs found in the second image.
second_image_face_IDs = list(map(lambda x: x.face_id, detected_faces2))
# Next, find similar face IDs like the one detected in the first image.
similar_faces = face_client.face.find_similar(face_id=first_image_face_ID, face_ids=second_image_face_IDs)
if not similar_faces:
    print('No similar faces found in', multi_image_name + '.')

Use the following code to print the match details to the console.

# Print the details of the similar faces detected
print('Similar faces found in', multi_image_name + ':')
for face in similar_faces:
    first_image_face_ID = face.face_id
    # The similar face IDs of the single face image and the group image do not need to match, they are only used for identification purposes in each image.
    # The similar faces are matched using the Cognitive Services algorithm in find_similar().
    face_info = next(x for x in detected_faces2 if x.face_id == first_image_face_ID)
    if face_info:
        print('  Face ID: ', first_image_face_ID)
        print('  Face rectangle:')
        print('    Left: ', str(face_info.face_rectangle.left))
        print('    Top: ', str(face_info.face_rectangle.top))
        print('    Width: ', str(face_info.face_rectangle.width))
        print('    Height: ', str(face_info.face_rectangle.height))
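The lookup above uses next() without a default, which raises StopIteration when no detected face matches the ID. A hedged alternative, with stand-in objects in place of DetectedFace:

```python
from types import SimpleNamespace

def face_by_id(detected_faces, face_id):
    """Find a detected face by its ID; return None instead of raising when absent."""
    return next((f for f in detected_faces if f.face_id == face_id), None)

faces = [SimpleNamespace(face_id='aaa'), SimpleNamespace(face_id='bbb')]
print(face_by_id(faces, 'bbb').face_id)  # bbb
print(face_by_id(faces, 'zzz'))          # None
```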

Create and train a person group

The following code creates a PersonGroup with three different Person objects. It associates each Person with a set of example images, and then it trains to be able to recognize each person.

Create PersonGroup

To step through this scenario, you need to save the following images to the root directory of your project: https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/Face/images.

This group of images contains three sets of face images corresponding to three different people. The code will define three Person objects and associate them with image files that start with woman, man, and child.

Once you've set up your images, define a label at the top of your script for the PersonGroup object you'll create.

# Used in the Person Group Operations, Snapshot Operations, and Delete Person Group examples.
# You can call list_person_groups to print a list of preexisting PersonGroups.
# PERSON_GROUP_ID should be all lowercase and alphanumeric (dashes are OK). For example, 'mygroupname'.
PERSON_GROUP_ID = 'my-unique-person-group'

# Used for the Snapshot and Delete Person Group examples.
TARGET_PERSON_GROUP_ID = str(uuid.uuid4()) # assign a random ID (or name it anything)
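If you want to check a candidate ID against the naming rule before calling the service, a regular expression works. This sketch assumes the documented rule (lowercase letters, digits, '-' and '_') and the 64-character limit from the Face API reference:

```python
import re

def is_valid_person_group_id(pg_id):
    """Check lowercase letters, digits, '-' and '_', between 1 and 64 characters."""
    return bool(re.fullmatch(r'[a-z0-9_-]{1,64}', pg_id))

print(is_valid_person_group_id('my-unique-person-group'))  # True
print(is_valid_person_group_id('My Group'))                # False
```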

Then add the following code to the bottom of your script. This code creates a PersonGroup and three Person objects.

Create the PersonGroup
# Create empty Person Group. Person Group ID must be lower case, alphanumeric, and/or with '-', '_'.
print('Person group:', PERSON_GROUP_ID)
face_client.person_group.create(person_group_id=PERSON_GROUP_ID, name=PERSON_GROUP_ID)

# Define woman friend
woman = face_client.person_group_person.create(PERSON_GROUP_ID, "Woman")
# Define man friend
man = face_client.person_group_person.create(PERSON_GROUP_ID, "Man")
# Define child friend
child = face_client.person_group_person.create(PERSON_GROUP_ID, "Child")

Assign faces to Persons

The following code sorts your images by their prefix, detects faces, and assigns the faces to each Person object.

Detect faces and register to correct person
# Find all jpeg images of friends in working directory
woman_images = [file for file in glob.glob('*.jpg') if file.startswith("woman")]
man_images = [file for file in glob.glob('*.jpg') if file.startswith("man")]
child_images = [file for file in glob.glob('*.jpg') if file.startswith("child")]

# Add to a woman person
for image in woman_images:
    with open(image, 'r+b') as w:
        face_client.person_group_person.add_face_from_stream(PERSON_GROUP_ID, woman.person_id, w)

# Add to a man person
for image in man_images:
    with open(image, 'r+b') as m:
        face_client.person_group_person.add_face_from_stream(PERSON_GROUP_ID, man.person_id, m)

# Add to a child person
for image in child_images:
    with open(image, 'r+b') as ch:
        face_client.person_group_person.add_face_from_stream(PERSON_GROUP_ID, child.person_id, ch)

Train PersonGroup

Once you've assigned faces, you must train the PersonGroup so that it can identify the visual features associated with each of its Person objects. The following code calls the asynchronous train method and polls the result, printing the status to the console.

Train PersonGroup
print('Training the person group...')
# Train the person group
face_client.person_group.train(PERSON_GROUP_ID)

while (True):
    training_status = face_client.person_group.get_training_status(PERSON_GROUP_ID)
    print("Training status: {}.".format(training_status.status))
    if (training_status.status is TrainingStatusType.succeeded):
        break
    elif (training_status.status is TrainingStatusType.failed):
        sys.exit('Training the person group has failed.')
    time.sleep(5)

Identify a face

The following code takes an image with multiple faces and looks to find the identity of each person in the image. It compares each detected face to a PersonGroup, a database of different Person objects whose facial features are known.


In order to run this example, you must first run the code in Create and train a person group.

Get a test image

The following code looks in the root of your project for an image test-image-person-group.jpg and detects the faces in the image. You can find this image with the images used for PersonGroup management: https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/Face/images.

Identify a face against a defined PersonGroup
# Group image for testing against
group_photo = 'test-image-person-group.jpg'
IMAGES_FOLDER = os.path.join(os.path.dirname(os.path.realpath(__file__)))
# Get test image
test_image_array = glob.glob(os.path.join(IMAGES_FOLDER, group_photo))
image = open(test_image_array[0], 'r+b')

# Detect faces
face_ids = []
faces = face_client.face.detect_with_stream(image)
for face in faces:
    face_ids.append(face.face_id)

Identify faces

The identify method takes an array of detected faces and compares them to a PersonGroup. If it can match a detected face to a Person, it saves the result. This code prints detailed match results to the console.

# Identify faces
results = face_client.face.identify(face_ids, PERSON_GROUP_ID)
print('Identifying faces in {}'.format(os.path.basename(image.name)))
if not results:
    print('No person identified in the person group for faces from {}.'.format(os.path.basename(image.name)))
for person in results:
    if person.candidates:
        print('Person for face ID {} is identified in {} with a confidence of {}.'.format(person.face_id, os.path.basename(image.name), person.candidates[0].confidence)) # Get topmost confidence score
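identify can return faces whose candidate list is empty when no Person matches. A defensive way to pick the top candidate without relying on the service's ordering (SimpleNamespace stands in for the SDK's result types):

```python
from types import SimpleNamespace

def best_candidate(identify_result):
    """Return the highest-confidence candidate, or None when the face matched nobody."""
    if not identify_result.candidates:
        return None
    return max(identify_result.candidates, key=lambda c: c.confidence)

# Hypothetical identify result with two candidates.
result = SimpleNamespace(face_id='f1', candidates=[
    SimpleNamespace(person_id='p1', confidence=0.62),
    SimpleNamespace(person_id='p2', confidence=0.91),
])
print(best_candidate(result).person_id)  # p2
```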

Verify faces

The Verify operation takes a face ID and either another face ID or a Person object and determines whether they belong to the same person.

The following code detects faces in two source images and then verifies them against a face detected from a target image.

Get test images

The following code blocks declare variables that will point to the source and target images for the verification operation.

# Base url for the Verify and Facelist/Large Facelist operations
IMAGE_BASE_URL = 'https://csdx.blob.core.windows.net/resources/Face/Images/'
# Create a list to hold the target photos of the same person
target_image_file_names = ['Family1-Dad1.jpg', 'Family1-Dad2.jpg']
# The source photos contain this person
source_image_file_name1 = 'Family1-Dad3.jpg'
source_image_file_name2 = 'Family1-Son1.jpg'

Detect faces for verification

The following code detects faces in the source and target images and saves them to variables.

# Detect face(s) from source image 1, returns a list[DetectedFaces]
detected_faces1 = face_client.face.detect_with_url(IMAGE_BASE_URL + source_image_file_name1)
# Add the returned face's face ID
source_image1_id = detected_faces1[0].face_id
print('{} face(s) detected from image {}.'.format(len(detected_faces1), source_image_file_name1))

# Detect face(s) from source image 2, returns a list[DetectedFaces]
detected_faces2 = face_client.face.detect_with_url(IMAGE_BASE_URL + source_image_file_name2)
# Add the returned face's face ID
source_image2_id = detected_faces2[0].face_id
print('{} face(s) detected from image {}.'.format(len(detected_faces2), source_image_file_name2))

# List for the target face IDs (uuids)
detected_faces_ids = []
# Detect faces from target image url list, returns a list[DetectedFaces]
for image_file_name in target_image_file_names:
    detected_faces = face_client.face.detect_with_url(IMAGE_BASE_URL + image_file_name)
    # Add the returned face's face ID
    detected_faces_ids.append(detected_faces[0].face_id)
    print('{} face(s) detected from image {}.'.format(len(detected_faces), image_file_name))

Get verification results

The following code compares each of the source images to the target image and prints a message indicating whether they belong to the same person.

# Verification example for faces of the same person. The higher the confidence, the more identical the faces in the images are.
# Since target faces are the same person, in this example, we can use the 1st ID in the detected_faces_ids list to compare.
verify_result_same = face_client.face.verify_face_to_face(source_image1_id, detected_faces_ids[0])
print('Faces from {} & {} are of the same person, with confidence: {}'
    .format(source_image_file_name1, target_image_file_names[0], verify_result_same.confidence)
    if verify_result_same.is_identical
    else 'Faces from {} & {} are of a different person, with confidence: {}'
        .format(source_image_file_name1, target_image_file_names[0], verify_result_same.confidence))

# Verification example for faces of different persons.
# The source face belongs to a different person than the target faces, so is_identical should be false.
verify_result_diff = face_client.face.verify_face_to_face(source_image2_id, detected_faces_ids[0])
print('Faces from {} & {} are of the same person, with confidence: {}'
    .format(source_image_file_name2, target_image_file_names[0], verify_result_diff.confidence)
    if verify_result_diff.is_identical
    else 'Faces from {} & {} are of a different person, with confidence: {}'
        .format(source_image_file_name2, target_image_file_names[0], verify_result_diff.confidence))
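The inline conditional expressions above are hard to read; the same messages can be produced by a small helper that renders a VerifyResult-like object (the mock below is a stand-in, not an SDK type):

```python
from types import SimpleNamespace

def describe_verification(result, source_name, target_name):
    """Render a verify result as the sentence printed by the snippets above."""
    relation = 'the same person' if result.is_identical else 'a different person'
    return 'Faces from {} & {} are of {}, with confidence: {}'.format(
        source_name, target_name, relation, result.confidence)

mock_result = SimpleNamespace(is_identical=True, confidence=0.92)
print(describe_verification(mock_result, 'Family1-Dad3.jpg', 'Family1-Dad1.jpg'))
```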

Take a snapshot for data migration

The Snapshots feature lets you move your saved face data, such as a trained PersonGroup, to a different Azure Cognitive Services Face subscription. You may want to use this feature if, for example, you've created a PersonGroup object using a free trial subscription and now want to migrate it to a paid subscription. See Migrate your face data for a broad overview of the Snapshots feature.

In this example, you will migrate the PersonGroup you created in Create and train a person group. You can either complete that section first, or use your own Face data construct(s).

Set up target subscription

First, you must have a second Azure subscription with a Face resource; you can do this by following the steps in the Setting up section.

Then, create the following variables near the top of your script. You'll also need to create new environment variables for the subscription ID of your Azure account, as well as the key, endpoint, and subscription ID of your new (target) account.

Snapshot operations variables
These are only used for the snapshot example. Set your environment variables accordingly.
# Source endpoint, the location/subscription where the original person group is located.
SOURCE_ENDPOINT = ENDPOINT
# Source subscription key. Must match the source endpoint region.
SOURCE_KEY = KEY
# Source subscription ID. Found in the Azure portal in the Overview page of your Face (or any) resource.
SOURCE_ID = os.environ['AZURE_SUBSCRIPTION_ID']
# Person group name that will get created in this quickstart's Person Group Operations example.
SOURCE_PERSON_GROUP_ID = PERSON_GROUP_ID
# Target endpoint. This is your 2nd Face subscription.
TARGET_ENDPOINT = os.environ['FACE_ENDPOINT2']
# Target subscription key. Must match the target endpoint region.
TARGET_KEY = os.environ['FACE_SUBSCRIPTION_KEY2']
# Target subscription ID. It will be the same as the source ID if you created the Face resources from the same subscription (but are moving from region to region). If they are different subscriptions, use the other target ID here.
TARGET_ID = os.environ['AZURE_SUBSCRIPTION_ID2']
# NOTE: We do not need to specify the target PersonGroup ID here because we generate it with this example.
# Each new location you transfer a person group to will have a generated, new person group ID for that region.

Authenticate target client

Later in your script, save your current client object as the source client, and then authenticate a new client object for your target subscription.

# Use your source client already created (it has the person group ID you need in it).
face_client_source = face_client
# Create a new FaceClient instance for your target with authentication.
face_client_target = FaceClient(TARGET_ENDPOINT, CognitiveServicesCredentials(TARGET_KEY))

Use a snapshot

The rest of the snapshot operations take place within an asynchronous function.

  1. The first step is to take the snapshot, which saves your original subscription's face data to a temporary cloud location. This method returns an ID that you use to query the status of the operation.

    Snapshot operations in 4 steps
    async def run():
        # STEP 1, take a snapshot of your person group, then track status.
        # This list must include all subscription IDs from which you want to access the snapshot.
        source_list = [SOURCE_ID, TARGET_ID]
        # You may have many sources, if transferring from many regions
        # remove any duplicates from the list. Passing the same subscription ID more than once causes
        # the Snapshot.take operation to fail.
        source_list = list(dict.fromkeys(source_list))
        # Note Snapshot.take is not asynchronous.
        # For information about Snapshot.take see:
        # https://github.com/Azure/azure-sdk-for-python/blob/master/azure-cognitiveservices-vision-face/azure/cognitiveservices/vision/face/operations/snapshot_operations.py#L36
        take_snapshot_result = face_client_source.snapshot.take(
            type=SnapshotObjectType.person_group,
            object_id=PERSON_GROUP_ID,
            apply_scope=source_list,
            # Set this to tell Snapshot.take to return the response; otherwise it returns None.
            raw=True
            )
        # Get operation ID from response for tracking
        # Snapshot.take return value is of type msrest.pipeline.ClientRawResponse. See:
        # https://docs.microsoft.com/en-us/python/api/msrest/msrest.pipeline.clientrawresponse?view=azure-python
        take_operation_id = take_snapshot_result.response.headers['Operation-Location'].replace('/operations/', '')
        print('Taking snapshot (operation ID:', take_operation_id, ')...')
  2. Next, query the ID until the operation has completed.

    # STEP 2, Wait for snapshot taking to complete.
    take_status = await wait_for_operation(face_client_source, take_operation_id)
    # Get snapshot id from response.
    snapshot_id = take_status.resource_location.replace ('/snapshots/', '')
    print('Snapshot ID:', snapshot_id)
    print('Taking snapshot... Done\n')

    This code makes use of the wait_for_operation function, which you should define separately:

    # Helper function that waits and checks status of API call processing.
    async def wait_for_operation(client, operation_id):
        # Track progress of taking the snapshot.
        # Note Snapshot.get_operation_status is not asynchronous.
        # For information about Snapshot.get_operation_status see:
        # https://github.com/Azure/azure-sdk-for-python/blob/master/azure-cognitiveservices-vision-face/azure/cognitiveservices/vision/face/operations/snapshot_operations.py#L466
        result = client.snapshot.get_operation_status(operation_id=operation_id)
        status = result.status.lower()
        print('Operation status:', status)
        if ('notstarted' == status or 'running' == status):
            print("Waiting 10 seconds...")
            await asyncio.sleep(10)
            result = await wait_for_operation(client, operation_id)
        elif ('failed' == status):
            raise Exception("Operation failed. Reason:" + result.message)
        return result
  3. Go back to your asynchronous function. Use the apply operation to write your face data to your target subscription. This method also returns an ID.

    # STEP 3, apply the snapshot to target region(s)
    # Snapshot.apply is not asynchronous.
    # For information about Snapshot.apply see:
    # https://github.com/Azure/azure-sdk-for-python/blob/master/azure-cognitiveservices-vision-face/azure/cognitiveservices/vision/face/operations/snapshot_operations.py#L366
    apply_snapshot_result = face_client_target.snapshot.apply(
        snapshot_id=snapshot_id,
        # Use the generated UUID from above as the target person group ID.
        object_id=TARGET_PERSON_GROUP_ID,
        # Set this to tell Snapshot.apply to return the response; otherwise it returns None.
        raw=True
        )
    apply_operation_id = apply_snapshot_result.response.headers['Operation-Location'].replace('/operations/', '')
    print('Applying snapshot (operation ID:', apply_operation_id, ')...')
  4. Again, use the wait_for_operation function to query the ID until the operation has completed.

    # STEP 4, wait for applying snapshot process to complete.
    await wait_for_operation(face_client_target, apply_operation_id)
    print('Applying snapshot... Done\n')
    print('End of transfer.')

Once you've completed these steps, you'll be able to access your face data constructs from your new (target) subscription.
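The snippets above define run() but never start it. On Python 3.7+, asyncio.run drives a coroutine to completion; the body below is a placeholder standing in for steps 1 through 4:

```python
import asyncio

async def run():
    # Placeholder body; in the real script this performs snapshot steps 1-4.
    await asyncio.sleep(0)
    return 'End of transfer.'

print(asyncio.run(run()))  # End of transfer.
```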

Run the application

Run the application with the python command on your quickstart file.

python quickstart-file.py

Clean up resources

If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

If you created a PersonGroup in this quickstart and you want to delete it, run the following code in your script:

# Delete the main person group.
face_client.person_group.delete(person_group_id=PERSON_GROUP_ID)
print("Deleted the person group {} from the source location.".format(PERSON_GROUP_ID))

If you migrated data using the Snapshot feature in this quickstart, you'll also need to delete the PersonGroup saved to the target subscription.

# Delete the person group in the target region.
face_client_target.person_group.delete(person_group_id=TARGET_PERSON_GROUP_ID)
print("Deleted the person group {} from the target location.".format(TARGET_PERSON_GROUP_ID))

Next steps

In this quickstart, you learned how to use the Face client library for Python to do basic tasks. Next, explore the reference documentation to learn more about the library.