Quickstart: Use the Face client library for .NET

Get started with facial recognition using the Face client library for .NET. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images. Follow these steps to install the package and try out the example code for basic face identification using remote images.

Reference documentation | Library source code | Package (NuGet) | Samples

Prerequisites

  • Azure subscription - Create one for free
  • The Visual Studio IDE or the current version of .NET Core.
  • Your Azure account must have a Cognitive Services Contributor role assigned in order for you to agree to the responsible AI terms and create a resource. To get this role assigned to your account, follow the steps in the Assign roles documentation, or contact your administrator.
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, click Go to resource.
    • You will need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below later in the quickstart.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Identify faces

  1. Create a new C# application

    Using Visual Studio, create a new .NET Core application.

    Install the client library

    Once you've created a new project, install the client library by right-clicking on the project solution in the Solution Explorer and selecting Manage NuGet Packages. In the package manager that opens, select Browse, check Include prerelease, and search for Microsoft.Azure.CognitiveServices.Vision.Face. Select version 2.7.0-preview.1, and then select Install.
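
    Alternatively, if you prefer the command line, you can install the same package with the .NET CLI from the project directory (same package name and version as above):

    dotnet add package Microsoft.Azure.CognitiveServices.Vision.Face --version 2.7.0-preview.1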

  2. Add the following code into the Program.cs file.

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;
    
    using Microsoft.Azure.CognitiveServices.Vision.Face;
    using Microsoft.Azure.CognitiveServices.Vision.Face.Models;
    
    namespace FaceQuickstart
    {
        class Program
        {
            static string personGroupId = Guid.NewGuid().ToString();
    
            // URL path for the images.
            const string IMAGE_BASE_URL = "https://csdx.blob.core.windows.net/resources/Face/Images/";
    
            // From your Face subscription in the Azure portal, get your subscription key and endpoint.
            const string SUBSCRIPTION_KEY = "PASTE_YOUR_FACE_SUBSCRIPTION_KEY_HERE";
            const string ENDPOINT = "PASTE_YOUR_FACE_ENDPOINT_HERE";
    
            static void Main(string[] args)
            {
                // Recognition model 4 was released in February 2021.
                // It is recommended because its accuracy is improved
                // on faces wearing masks compared with model 3,
                // and its overall accuracy is improved compared
                // with models 1 and 2.
                const string RECOGNITION_MODEL4 = RecognitionModel.Recognition04;
    
                // Authenticate.
                IFaceClient client = Authenticate(ENDPOINT, SUBSCRIPTION_KEY);
    
                // Identify - recognize faces in a person group (a person group is created in this example).
                IdentifyInPersonGroup(client, IMAGE_BASE_URL, RECOGNITION_MODEL4).Wait();
    
                Console.WriteLine("End of quickstart.");
            }
    
            /*
             * AUTHENTICATE
             * Uses subscription key and endpoint to create a client.
             */
            public static IFaceClient Authenticate(string endpoint, string key)
            {
                return new FaceClient(new ApiKeyServiceClientCredentials(key)) { Endpoint = endpoint };
            }
    
            // Detect faces from image url for recognition purpose. This is a helper method for other functions in this quickstart.
            // Parameter `returnFaceId` of `DetectWithUrlAsync` must be set to `true` (the default) for recognition purposes.
            // Parameter `returnFaceAttributes` is set to include the QualityForRecognition attribute.
            // Recognition model must be set to recognition_03 or recognition_04 as a result.
            // Result faces with insufficient quality for recognition are filtered out.
            // The field `faceId` in returned `DetectedFace`s will be used in Face - Find Similar, Face - Verify, and Face - Identify.
            // It will expire 24 hours after the detection call.
            private static async Task<List<DetectedFace>> DetectFaceRecognize(IFaceClient faceClient, string url, string recognition_model)
            {
                // Detect faces from image URL, using the recognition model passed in by the caller.
                // We use detection model 3 because we are retrieving the QualityForRecognition attribute.
                IList<DetectedFace> detectedFaces = await faceClient.Face.DetectWithUrlAsync(url, recognitionModel: recognition_model, detectionModel: DetectionModel.Detection03, returnFaceAttributes: new List<FaceAttributeType> { FaceAttributeType.QualityForRecognition });
                List<DetectedFace> sufficientQualityFaces = new List<DetectedFace>();
                foreach (DetectedFace detectedFace in detectedFaces){
                    var faceQualityForRecognition = detectedFace.FaceAttributes.QualityForRecognition;
                    if (faceQualityForRecognition.HasValue && (faceQualityForRecognition.Value >= QualityForRecognition.Medium)){
                        sufficientQualityFaces.Add(detectedFace);
                    }
                }
                Console.WriteLine($"{detectedFaces.Count} face(s) with {sufficientQualityFaces.Count} having sufficient quality for recognition detected from image `{Path.GetFileName(url)}`");
    
                return sufficientQualityFaces.ToList();
            }
    
            /*
             * IDENTIFY FACES
             * To identify faces, you need to create and define a person group.
             * The Identify operation takes one or several face IDs from DetectedFace or PersistedFace and a PersonGroup and returns 
             * a list of Person objects that each face might belong to. Returned Person objects are wrapped as Candidate objects, 
             * which have a prediction confidence value.
             */
            public static async Task IdentifyInPersonGroup(IFaceClient client, string url, string recognitionModel)
            {
                Console.WriteLine("========IDENTIFY FACES========");
                Console.WriteLine();
    
                // Create a dictionary for all your images, grouping similar ones under the same key.
                Dictionary<string, string[]> personDictionary =
                    new Dictionary<string, string[]>
                        { { "Family1-Dad", new[] { "Family1-Dad1.jpg", "Family1-Dad2.jpg" } },
                          { "Family1-Mom", new[] { "Family1-Mom1.jpg", "Family1-Mom2.jpg" } },
                          { "Family1-Son", new[] { "Family1-Son1.jpg", "Family1-Son2.jpg" } },
                          { "Family1-Daughter", new[] { "Family1-Daughter1.jpg", "Family1-Daughter2.jpg" } },
                          { "Family2-Lady", new[] { "Family2-Lady1.jpg", "Family2-Lady2.jpg" } },
                          { "Family2-Man", new[] { "Family2-Man1.jpg", "Family2-Man2.jpg" } }
                        };
                // A group photo that includes some of the persons you seek to identify from your dictionary.
                string sourceImageFileName = "identification1.jpg";
    
                // Create a person group. 
                Console.WriteLine($"Create a person group ({personGroupId}).");
                await client.PersonGroup.CreateAsync(personGroupId, personGroupId, recognitionModel: recognitionModel);
                // The similar faces will be grouped into a single person group person.
                foreach (var groupedFace in personDictionary.Keys)
                {
                    // Limit TPS
                    await Task.Delay(250);
                    Person person = await client.PersonGroupPerson.CreateAsync(personGroupId: personGroupId, name: groupedFace);
                    Console.WriteLine($"Create a person group person '{groupedFace}'.");
    
                    // Add face to the person group person.
                    foreach (var similarImage in personDictionary[groupedFace])
                    {
                        Console.WriteLine($"Check whether image is of sufficient quality for recognition");
                        IList<DetectedFace> detectedFaces = await client.Face.DetectWithUrlAsync($"{url}{similarImage}", 
                            recognitionModel: recognition_model, 
                            detectionModel: DetectionModel.Detection03,
                            returnFaceAttributes: new List<FaceAttributeType> { FaceAttributeType.QualityForRecognition });
                        bool sufficientQuality = true;
                        foreach (var face in detectedFaces)
                        {
                            var faceQualityForRecognition = face.FaceAttributes.QualityForRecognition;
                            //  Only "high" quality images are recommended for person enrollment
                            if (faceQualityForRecognition.HasValue && (faceQualityForRecognition.Value != QualityForRecognition.High)){
                                sufficientQuality = false;
                                break;
                            }
                        }
    
                        if (!sufficientQuality){
                            continue;
                        }
    
    
                        Console.WriteLine($"Add face to the person group person({groupedFace}) from image `{similarImage}`");
                        PersistedFace face = await client.PersonGroupPerson.AddFaceFromUrlAsync(personGroupId, person.PersonId,
                            $"{url}{similarImage}", similarImage);
                    }
                }
    
                // Start to train the person group.
                Console.WriteLine();
                Console.WriteLine($"Train person group {personGroupId}.");
                await client.PersonGroup.TrainAsync(personGroupId);
    
                // Wait until the training is completed.
                while (true)
                {
                    await Task.Delay(1000);
                    var trainingStatus = await client.PersonGroup.GetTrainingStatusAsync(personGroupId);
                    Console.WriteLine($"Training status: {trainingStatus.Status}.");
                    if (trainingStatus.Status == TrainingStatusType.Succeeded) { break; }
                }
                Console.WriteLine();
    
                List<Guid> sourceFaceIds = new List<Guid>();
                // Detect faces from source image url.
                List<DetectedFace> detectedFaces = await DetectFaceRecognize(client, $"{url}{sourceImageFileName}", recognitionModel);
    
                // Add detected faceId to sourceFaceIds.
                foreach (var detectedFace in detectedFaces) { sourceFaceIds.Add(detectedFace.FaceId.Value); }
                
                // Identify the faces in a person group. 
                var identifyResults = await client.Face.IdentifyAsync(sourceFaceIds, personGroupId);
    
                foreach (var identifyResult in identifyResults)
                {
                    if (identifyResult.Candidates.Count==0) {
                        Console.WriteLine($"No person is identified for the face in: {sourceImageFileName} - {identifyResult.FaceId},");
                        continue;
                    }
                    Person person = await client.PersonGroupPerson.GetAsync(personGroupId, identifyResult.Candidates[0].PersonId);
                    Console.WriteLine($"Person '{person.Name}' is identified for the face in: {sourceImageFileName} - {identifyResult.FaceId}," +
                        $" confidence: {identifyResult.Candidates[0].Confidence}.");
                }
                Console.WriteLine();
            }
        }
    }
    
  3. Enter your key and endpoint in the corresponding fields.

    Important

    Go to the Azure portal. If the Face resource you created in the Prerequisites section deployed successfully, click the Go to Resource button under Next Steps. You can find your key and endpoint on the resource's Keys and Endpoint page, under Resource Management.

    Important

    Remember to remove the key from your code when you're done, and never post it publicly. For production, consider using a secure way of storing and accessing your credentials. See the Cognitive Services security article for more information.
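
    For example, one approach is to read the values from environment variables instead of hard-coding them. The following is only a sketch; FACE_SUBSCRIPTION_KEY and FACE_ENDPOINT are assumed variable names that you would set yourself, and the fields become static readonly instead of const:

    // Sketch: load credentials from environment variables you have set yourself.
    // FACE_SUBSCRIPTION_KEY and FACE_ENDPOINT are assumed names, not part of the SDK.
    static readonly string SUBSCRIPTION_KEY = Environment.GetEnvironmentVariable("FACE_SUBSCRIPTION_KEY");
    static readonly string ENDPOINT = Environment.GetEnvironmentVariable("FACE_ENDPOINT");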

  4. Run the application

    Run the application by clicking the Debug button at the top of the IDE window.

Output

========IDENTIFY FACES========

Create a person group (3972c063-71b3-4328-8579-6d190ee76f99).
Create a person group person 'Family1-Dad'.
Add face to the person group person(Family1-Dad) from image `Family1-Dad1.jpg`
Add face to the person group person(Family1-Dad) from image `Family1-Dad2.jpg`
Create a person group person 'Family1-Mom'.
Add face to the person group person(Family1-Mom) from image `Family1-Mom1.jpg`
Add face to the person group person(Family1-Mom) from image `Family1-Mom2.jpg`
Create a person group person 'Family1-Son'.
Add face to the person group person(Family1-Son) from image `Family1-Son1.jpg`
Add face to the person group person(Family1-Son) from image `Family1-Son2.jpg`
Create a person group person 'Family1-Daughter'.
Create a person group person 'Family2-Lady'.
Add face to the person group person(Family2-Lady) from image `Family2-Lady1.jpg`
Add face to the person group person(Family2-Lady) from image `Family2-Lady2.jpg`
Create a person group person 'Family2-Man'.
Add face to the person group person(Family2-Man) from image `Family2-Man1.jpg`
Add face to the person group person(Family2-Man) from image `Family2-Man2.jpg`

Train person group 3972c063-71b3-4328-8579-6d190ee76f99.
Training status: Succeeded.

4 face(s) with 4 having sufficient quality for recognition detected from image `identification1.jpg`
Person 'Family1-Dad' is identified for the face in: identification1.jpg - 994bfd7a-0d8f-4fae-a5a6-c524664cbee7, confidence: 0.96725.
Person 'Family1-Mom' is identified for the face in: identification1.jpg - 0c9da7b9-a628-429d-97ff-cebe7c638fb5, confidence: 0.96921.
No person is identified for the face in: identification1.jpg - a881259c-e811-4f7e-a35e-a453e95ca18f,
Person 'Family1-Son' is identified for the face in: identification1.jpg - 53772235-8193-46eb-bdfc-1ebc25ea062e, confidence: 0.92886.

End of quickstart.

Tip

The Face API runs on a set of pre-built models that are static by nature (the model's performance will not regress or improve as the service is run). The results that the model produces might change if Microsoft updates the model's backend without migrating to an entirely new model version. To take advantage of a newer version of a model, you can retrain your PersonGroup, specifying the newer model as a parameter with the same enrollment images.
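
A minimal sketch of that migration, assuming the same enrollment images are still available (newGroupId is a hypothetical variable, and the re-enrollment loop is elided):

// The recognition model of a person group is fixed at creation time, so moving to a
// newer model means creating a new group, re-adding the same faces, and retraining.
string newGroupId = Guid.NewGuid().ToString();
await client.PersonGroup.CreateAsync(newGroupId, "my-group", recognitionModel: RecognitionModel.Recognition04);
// ... re-add each enrollment image with PersonGroupPerson.CreateAsync and AddFaceFromUrlAsync ...
await client.PersonGroup.TrainAsync(newGroupId);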

Clean up resources

If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

To delete the PersonGroup you created in this quickstart, run the following code in your program:

// At the end, delete the person group (it was created for testing only).
Console.WriteLine("========DELETE PERSON GROUP========");
Console.WriteLine();
DeletePersonGroup(client, personGroupId).Wait();

Define the deletion method with the following code:

/*
 * DELETE PERSON GROUP
 * After this entire example is executed, delete the person group in your Azure account,
 * otherwise you cannot recreate one with the same name (if running example repeatedly).
 */
public static async Task DeletePersonGroup(IFaceClient client, string personGroupId)
{
    await client.PersonGroup.DeleteAsync(personGroupId);
    Console.WriteLine($"Deleted the person group {personGroupId}.");
}

Next steps

In this quickstart, you learned how to use the Face client library for .NET to do basic face identification. Next, learn about the different face detection models and how to specify the right model for your use case.

Quickstart: Face client library for JavaScript

Get started with facial recognition using the Face client library for JavaScript. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images. Follow these steps to install the package and try out the example code for basic face identification using remote images.

Reference documentation | Library source code | Package (npm) | Samples

Prerequisites

  • Azure subscription - Create one for free
  • The latest version of Node.js
  • Your Azure account must have a Cognitive Services Contributor role assigned in order for you to agree to the responsible AI terms and create a resource. To get this role assigned to your account, follow the steps in the Assign roles documentation, or contact your administrator.
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, click Go to resource.
    • You will need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below later in the quickstart.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Identify faces

  1. Create a new Node.js application

    In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it.

    mkdir myapp && cd myapp
    

    Run the npm init command to create a Node.js application with a package.json file.

    npm init
    
  2. Install the @azure/cognitiveservices-face, @azure/ms-rest-js, and uuid npm packages:

    npm install @azure/cognitiveservices-face @azure/ms-rest-js uuid
    

    Your app's package.json file will be updated with the dependencies.
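
    The dependencies section should then look similar to the following sketch (the exact version numbers will vary with the versions npm installs):

    "dependencies": {
        "@azure/cognitiveservices-face": "^5.0.0",
        "@azure/ms-rest-js": "^2.6.0",
        "uuid": "^8.3.2"
    }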

  3. Create a file named index.js, open it in a text editor, and paste in the following code:

    'use strict';
    
    const msRest = require("@azure/ms-rest-js");
    const Face = require("@azure/cognitiveservices-face");
    const { v4: uuid } = require('uuid');
    
    key = "PASTE_YOUR_FACE_SUBSCRIPTION_KEY_HERE";
    endpoint = "PASTE_YOUR_FACE_ENDPOINT_HERE";
    
    const credentials = new msRest.ApiKeyCredentials({ inHeader: { 'Ocp-Apim-Subscription-Key': key } });
    const client = new Face.FaceClient(credentials, endpoint);
    
    
    const image_base_url = "https://csdx.blob.core.windows.net/resources/Face/Images/";
    const person_group_id = uuid();
    
    function sleep(ms) {
        return new Promise(resolve => setTimeout(resolve, ms));
    }
    
    async function DetectFaceRecognize(url) {
        // Detect faces from image URL. Since we are only recognizing, use recognition model 4.
        // We use detection model 3 because we are only retrieving the qualityForRecognition attribute.
        // Result faces with quality for recognition lower than "medium" are filtered out.
        let detected_faces = await client.face.detectWithUrl(url,
            {
                detectionModel: "detection_03",
                recognitionModel: "recognition_04",
                returnFaceAttributes: ["QualityForRecognition"]
            });
        return detected_faces.filter(face => face.faceAttributes.qualityForRecognition == 'high' || face.faceAttributes.qualityForRecognition == 'medium');
    }
    
    async function AddFacesToPersonGroup(person_dictionary, person_group_id) {
        console.log ("Adding faces to person group...");
        // The similar faces will be grouped into a single person group person.
        
        await Promise.all (Object.keys(person_dictionary).map (async function (key) {
            const value = person_dictionary[key];
    
            // Wait briefly so we do not exceed rate limits.
            await sleep (2000);
    
            let person = await client.personGroupPerson.create(person_group_id, { name : key });
            console.log("Create a persongroup person: " + key + ".");
    
            // Add faces to the person group person.
            await Promise.all (value.map (async function (similar_image) {
                // Check if the image is of sufficient quality for recognition.
                let sufficientQuality = true;
                let detected_faces = await client.face.detectWithUrl(image_base_url + similar_image,
                    {
                        returnFaceAttributes: ["QualityForRecognition"],
                        detectionModel: "detection_03",
                        recognitionModel: "recognition_03"
                    });
                detected_faces.forEach(detected_face => {
                    if (detected_face.faceAttributes.qualityForRecognition != 'high'){
                        sufficientQuality = false;
                    }
                });
    
                // Quality is sufficient, add to group.
                if (sufficientQuality){
                    console.log("Add face to the person group person: (" + key + ") from image: " + similar_image + ".");
                    await client.personGroupPerson.addFaceFromUrl(person_group_id, person.personId, image_base_url + similar_image);
                }
            }));
        }));
    
        console.log ("Done adding faces to person group.");
    }
    
    async function WaitForPersonGroupTraining(person_group_id) {
        // Wait so we do not exceed rate limits.
        console.log ("Waiting 10 seconds...");
        await sleep (10000);
        let result = await client.personGroup.getTrainingStatus(person_group_id);
        console.log("Training status: " + result.status + ".");
        if (result.status !== "succeeded") {
            await WaitForPersonGroupTraining(person_group_id);
        }
    }
    
    /* NOTE This function might not work with the free tier of the Face service
    because it might exceed the rate limits. If that happens, try inserting calls
    to sleep() between calls to the Face service.
    */
    async function IdentifyInPersonGroup() {
        console.log("========IDENTIFY FACES========");
        console.log();
    
        // Create a dictionary for all your images, grouping similar ones under the same key.
        const person_dictionary = {
            "Family1-Dad" : ["Family1-Dad1.jpg", "Family1-Dad2.jpg"],
            "Family1-Mom" : ["Family1-Mom1.jpg", "Family1-Mom2.jpg"],
            "Family1-Son" : ["Family1-Son1.jpg", "Family1-Son2.jpg"],
            "Family1-Daughter" : ["Family1-Daughter1.jpg", "Family1-Daughter2.jpg"],
            "Family2-Lady" : ["Family2-Lady1.jpg", "Family2-Lady2.jpg"],
            "Family2-Man" : ["Family2-Man1.jpg", "Family2-Man2.jpg"]
        };
    
        // A group photo that includes some of the persons you seek to identify from your dictionary.
        let source_image_file_name = "identification1.jpg";
    
        
        // Create a person group. 
        console.log("Creating a person group with ID: " + person_group_id);
        await client.personGroup.create(person_group_id, person_group_id, {recognitionModel : "recognition_04" });
    
        await AddFacesToPersonGroup(person_dictionary, person_group_id);
    
        // Start to train the person group.
        console.log();
        console.log("Training person group: " + person_group_id + ".");
        await client.personGroup.train(person_group_id);
    
        await WaitForPersonGroupTraining(person_group_id);
        console.log();
    
        // Detect faces from source image url and only take those with sufficient quality for recognition.
        let face_ids = (await DetectFaceRecognize(image_base_url + source_image_file_name)).map (face => face.faceId);
        // Identify the faces in a person group.
        let results = await client.face.identify(face_ids, { personGroupId : person_group_id});
        await Promise.all (results.map (async function (result) {
            if (result.candidates.length === 0) {
                console.log("No person identified for face in: " + source_image_file_name + " with ID: " + result.faceId + ".");
                return;
            }
            let person = await client.personGroupPerson.get(person_group_id, result.candidates[0].personId);
            console.log("Person: " + person.name + " is identified for face in: " + source_image_file_name + " with ID: " + result.faceId + ". Confidence: " + result.candidates[0].confidence + ".");
        }));
        console.log();
    }
    
    async function main() {
        await IdentifyInPersonGroup();
        console.log ("Done.");
    }
    main();
    
  4. Enter your key and endpoint into the corresponding fields.

    Important

    Go to the Azure portal. If the Face resource you created in the Prerequisites section deployed successfully, click the Go to Resource button under Next Steps. You can find your key and endpoint on the resource's Keys and Endpoint page, under Resource Management.

    Important

    Remember to remove the key from your code when you're done, and never post it publicly. For production, consider using a secure way of storing and accessing your credentials. See the Cognitive Services security article for more information.
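
    For example, you might read the values from environment variables instead of hard-coding them. The following is only a sketch; FACE_SUBSCRIPTION_KEY and FACE_ENDPOINT are assumed variable names that you would set yourself:

    // Sketch: load credentials from environment variables you have set yourself.
    const key = process.env.FACE_SUBSCRIPTION_KEY;
    const endpoint = process.env.FACE_ENDPOINT;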

  5. Run the application with the node command on your quickstart file.

    node index.js
    

Output

========IDENTIFY FACES========

Creating a person group with ID: c08484e0-044b-4610-8b7e-c957584e5d2d
Adding faces to person group...
Create a persongroup person: Family1-Dad.
Create a persongroup person: Family1-Mom.
Create a persongroup person: Family2-Lady.
Create a persongroup person: Family1-Son.
Create a persongroup person: Family1-Daughter.
Create a persongroup person: Family2-Man.
Add face to the person group person: (Family1-Son) from image: Family1-Son2.jpg.
Add face to the person group person: (Family1-Dad) from image: Family1-Dad2.jpg.
Add face to the person group person: (Family1-Mom) from image: Family1-Mom1.jpg.
Add face to the person group person: (Family2-Man) from image: Family2-Man1.jpg.
Add face to the person group person: (Family1-Son) from image: Family1-Son1.jpg.
Add face to the person group person: (Family2-Lady) from image: Family2-Lady2.jpg.
Add face to the person group person: (Family1-Mom) from image: Family1-Mom2.jpg.
Add face to the person group person: (Family1-Dad) from image: Family1-Dad1.jpg.
Add face to the person group person: (Family2-Man) from image: Family2-Man2.jpg.
Add face to the person group person: (Family2-Lady) from image: Family2-Lady1.jpg.
Done adding faces to person group.

Training person group: c08484e0-044b-4610-8b7e-c957584e5d2d.
Waiting 10 seconds...
Training status: succeeded.

Person: Family1-Mom is identified for face in: identification1.jpg with ID: b7f7f542-c338-4a40-ad52-e61772bc6e14. Confidence: 0.96921.
Person: Family1-Son is identified for face in: identification1.jpg with ID: 600dc1b4-b2c4-4516-87de-edbbdd8d7632. Confidence: 0.92886.
Person: Family1-Dad is identified for face in: identification1.jpg with ID: e83b494f-9ad2-473f-9d86-3de79c01e345. Confidence: 0.96725.

Clean up resources

If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
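
To delete the PersonGroup you created in this quickstart, you can call the person group delete operation at the end of main(). In the @azure/cognitiveservices-face package the generated method is named deleteMethod (delete is a reserved word in JavaScript); treat the following as a sketch and confirm the method name against the reference documentation for your package version:

// Delete the person group created in this quickstart (run inside an async function).
await client.personGroup.deleteMethod(person_group_id);
console.log("Deleted the person group: " + person_group_id + ".");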

Next steps

In this quickstart, you learned how to use the Face client library for JavaScript to do basic face identification. Next, learn about the different face detection models and how to specify the right model for your use case.

Quickstart: Face client library for Python

Get started with facial recognition using the Face client library for Python. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images. Follow these steps to install the package and try out the example code for basic face identification using remote images.

Reference documentation | Library source code | Package (PyPI) | Samples

Prerequisites

  • Azure subscription - Create one for free
  • Python 3.x
    • Your Python installation should include pip. You can check if you have pip installed by running pip --version on the command line. Get pip by installing the latest version of Python.
  • Your Azure account must have a Cognitive Services Contributor role assigned in order for you to agree to the responsible AI terms and create a resource. To get this role assigned to your account, follow the steps in the Assign roles documentation, or contact your administrator.
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, click Go to resource.
    • You will need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below later in the quickstart.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Identify faces

  1. Install the client library

    After installing Python, you can install the client library with:

    pip install --upgrade azure-cognitiveservices-vision-face
    
  2. Create a new Python application

    Create a new Python script, such as quickstart-file.py. Then open it in your preferred editor or IDE and paste in the following code.

    import glob
    import os
    import sys
    import time
    import uuid
    import requests
    from io import BytesIO
    # To install this module, run:
    # python -m pip install Pillow
    from PIL import Image, ImageDraw
    from azure.cognitiveservices.vision.face import FaceClient
    from msrest.authentication import CognitiveServicesCredentials
    from azure.cognitiveservices.vision.face.models import TrainingStatusType, Person, QualityForRecognition
    
    
    # This key will serve all examples in this document.
    KEY = "PASTE_YOUR_FACE_SUBSCRIPTION_KEY_HERE"
    
    # This endpoint will be used in all examples in this quickstart.
    ENDPOINT = "PASTE_YOUR_FACE_ENDPOINT_HERE"
    
    # Base url for the Verify and Facelist/Large Facelist operations
    IMAGE_BASE_URL = 'https://csdx.blob.core.windows.net/resources/Face/Images/'
    
    # Used in the Person Group Operations and Delete Person Group examples.
    # You can call list_person_groups to print a list of preexisting PersonGroups.
    # PERSON_GROUP_ID should be lowercase and alphanumeric (dashes are also allowed). For example, 'mygroupname'.
    PERSON_GROUP_ID = str(uuid.uuid4()) # assign a random ID (or name it anything)
    
    # Used for the Delete Person Group example.
    TARGET_PERSON_GROUP_ID = str(uuid.uuid4()) # assign a random ID (or name it anything)
    
    
    # Create an authenticated FaceClient.
    face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))
    
    '''
    Detect faces 
    Detect faces in two images (get ID), draw rectangle around a third image.
    '''
    print('-----------------------------')
    print()
    print('DETECT FACES')
    print()
    # Detect a face in an image that contains a single face
    single_face_image_url = 'https://www.biography.com/.image/t_share/MTQ1MzAyNzYzOTgxNTE0NTEz/john-f-kennedy---mini-biography.jpg'
    single_image_name = os.path.basename(single_face_image_url)
    # We use detection model 3 to get better performance.
    detected_faces = face_client.face.detect_with_url(url=single_face_image_url, detection_model='detection_03')
    if not detected_faces:
        raise Exception('No face detected from image {}'.format(single_image_name))
    
    # Display the detected face ID in the first single-face image.
    # Face IDs are used for comparison to faces (their IDs) detected in other images.
    print('Detected face ID from', single_image_name, ':')
    for face in detected_faces: print (face.face_id)
    print()
    
    # Save this ID for use in Find Similar
    first_image_face_ID = detected_faces[0].face_id
    
    # Detect the faces in an image that contains multiple faces
    # Each detected face gets assigned a new ID
    multi_face_image_url = "http://www.historyplace.com/kennedy/president-family-portrait-closeup.jpg"
    multi_image_name = os.path.basename(multi_face_image_url)
    # We use detection model 3 to get better performance.
    detected_faces2 = face_client.face.detect_with_url(url=multi_face_image_url, detection_model='detection_03')
    
    print('Detected face IDs from', multi_image_name, ':')
    if not detected_faces2:
        raise Exception('No face detected from image {}.'.format(multi_image_name))
    else:
        for face in detected_faces2:
            print(face.face_id)
    print()
    
    '''
    Print image and draw rectangles around faces
    '''
    # Detect a face in an image that contains a single face
    single_face_image_url = 'https://raw.githubusercontent.com/Microsoft/Cognitive-Face-Windows/master/Data/detection1.jpg'
    single_image_name = os.path.basename(single_face_image_url)
    # We use detection model 3 to get better performance.
    detected_faces = face_client.face.detect_with_url(url=single_face_image_url, detection_model='detection_03')
    if not detected_faces:
        raise Exception('No face detected from image {}'.format(single_image_name))
    
    # Convert the face rectangle's width and height to two corner points of the rectangle.
    def getRectangle(faceDictionary):
        rect = faceDictionary.face_rectangle
        left = rect.left
        top = rect.top
        right = left + rect.width
        bottom = top + rect.height
        
        return ((left, top), (right, bottom))
    
    def drawFaceRectangles():
        # Download the image from the url.
        response = requests.get(single_face_image_url)
        img = Image.open(BytesIO(response.content))

        # For each face returned, use the face rectangle and draw a red box.
        print('Drawing rectangle around face... see popup for results.')
        draw = ImageDraw.Draw(img)
        for face in detected_faces:
            draw.rectangle(getRectangle(face), outline='red')

        # Display the image in the default image browser.
        img.show()
    
    # Uncomment this to show the face rectangles.
    #    drawFaceRectangles()
    
    print()
    '''
    END - Detect faces
    '''
    
    
    '''
    Create/Train/Detect/Identify Person Group
    This example creates a Person Group, then trains it. It can then be used to detect and identify faces in other group images.
    '''
    print('-----------------------------')
    print()
    print('PERSON GROUP OPERATIONS')
    print()
    
    '''
    Create the PersonGroup
    '''
    # Create empty Person Group. Person Group ID must be lower case, alphanumeric, and/or with '-', '_'.
    print('Person group:', PERSON_GROUP_ID)
    face_client.person_group.create(person_group_id=PERSON_GROUP_ID, name=PERSON_GROUP_ID)
    
    # Define woman friend
    woman = face_client.person_group_person.create(PERSON_GROUP_ID, "Woman")
    # Define man friend
    man = face_client.person_group_person.create(PERSON_GROUP_ID, "Man")
    # Define child friend
    child = face_client.person_group_person.create(PERSON_GROUP_ID, "Child")
    
    '''
    Detect faces and register to correct person
    '''
    # Find all jpeg images of friends in working directory
    woman_images = [file for file in glob.glob('*.jpg') if file.startswith("w")]
    man_images = [file for file in glob.glob('*.jpg') if file.startswith("m")]
    child_images = [file for file in glob.glob('*.jpg') if file.startswith("ch")]
    
    # Add to a woman person
    for image in woman_images:
        w = open(image, 'r+b')
        # Check if the image is of sufficient quality for recognition.
        sufficientQuality = True
        detected_faces = face_client.face.detect_with_stream(w, detection_model='detection_03', recognition_model='recognition_04', return_face_attributes=['qualityForRecognition'])
        for face in detected_faces:
            if face.face_attributes.quality_for_recognition != QualityForRecognition.high:
                sufficientQuality = False
                break
        if not sufficientQuality: continue
        w.seek(0)  # Rewind the stream before uploading the face.
        face_client.person_group_person.add_face_from_stream(PERSON_GROUP_ID, woman.person_id, w)
    
    # Add to a man person
    for image in man_images:
        m = open(image, 'r+b')
        # Check if the image is of sufficient quality for recognition.
        sufficientQuality = True
        detected_faces = face_client.face.detect_with_stream(m, detection_model='detection_03', recognition_model='recognition_04', return_face_attributes=['qualityForRecognition'])
        for face in detected_faces:
            if face.face_attributes.quality_for_recognition != QualityForRecognition.high:
                sufficientQuality = False
                break
        if not sufficientQuality: continue
        m.seek(0)  # Rewind the stream before uploading the face.
        face_client.person_group_person.add_face_from_stream(PERSON_GROUP_ID, man.person_id, m)
    
    # Add to a child person
    for image in child_images:
        ch = open(image, 'r+b')
        # Check if the image is of sufficient quality for recognition.
        sufficientQuality = True
        detected_faces = face_client.face.detect_with_stream(ch, detection_model='detection_03', recognition_model='recognition_04', return_face_attributes=['qualityForRecognition'])
        for face in detected_faces:
            if face.face_attributes.quality_for_recognition != QualityForRecognition.high:
                sufficientQuality = False
                break
        if not sufficientQuality: continue
        ch.seek(0)  # Rewind the stream before uploading the face.
        face_client.person_group_person.add_face_from_stream(PERSON_GROUP_ID, child.person_id, ch)
    
    '''
    Train PersonGroup
    '''
    print()
    print('Training the person group...')
    # Train the person group
    face_client.person_group.train(PERSON_GROUP_ID)
    
    while (True):
        training_status = face_client.person_group.get_training_status(PERSON_GROUP_ID)
        print("Training status: {}.".format(training_status.status))
        print()
        if (training_status.status is TrainingStatusType.succeeded):
            break
        elif (training_status.status is TrainingStatusType.failed):
            face_client.person_group.delete(person_group_id=PERSON_GROUP_ID)
            sys.exit('Training the person group has failed.')
        time.sleep(5)
    
    '''
    Identify a face against a defined PersonGroup
    '''
    # Group image for testing against
    test_image_array = glob.glob('test-image-person-group.jpg')
    image = open(test_image_array[0], 'r+b')
    
    print('Pausing for 60 seconds to avoid triggering rate limit on free account...')
    time.sleep (60)
    
    # Detect faces
    face_ids = []
    # We use detection model 3 to get better performance, recognition model 4 to support quality for recognition attribute.
    faces = face_client.face.detect_with_stream(image, detection_model='detection_03', recognition_model='recognition_04', return_face_attributes=['qualityForRecognition'])
    for face in faces:
        # Only take the face if it is of sufficient quality.
        if face.face_attributes.quality_for_recognition == QualityForRecognition.high or face.face_attributes.quality_for_recognition == QualityForRecognition.medium:
            face_ids.append(face.face_id)
    
    # Identify faces
    results = face_client.face.identify(face_ids, PERSON_GROUP_ID)
    print('Identifying faces in {}'.format(os.path.basename(image.name)))
    if not results:
        print('No person identified in the person group for faces from {}.'.format(os.path.basename(image.name)))
    for person in results:
        if len(person.candidates) > 0:
            print('Person for face ID {} is identified in {} with a confidence of {}.'.format(person.face_id, os.path.basename(image.name), person.candidates[0].confidence)) # Get topmost confidence score
        else:
            print('No person identified for face ID {} in {}.'.format(person.face_id, os.path.basename(image.name)))
    print()
    '''
    END - Create/Train/Detect/Identify Person Group
    '''
    
    print()
    print('-----------------------------')
    print('End of quickstart.')
    
    
  3. Enter your key and endpoint into the corresponding fields.

    Important

    Go to the Azure portal. If the Face resource you created in the Prerequisites section deployed successfully, click the Go to Resource button under Next Steps. You can find your key and endpoint on the resource's Keys and Endpoint page, under Resource Management.

    Important

    Remember to remove the key from your code when you're done, and never post it publicly. For production, consider using a secure way of storing and accessing your credentials. See the Cognitive Services security article for more information.
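
    For example, you might read the values from environment variables instead of hard-coding them. The following is only a sketch; FACE_SUBSCRIPTION_KEY and FACE_ENDPOINT are assumed variable names that you would set yourself:

    # Sketch: load credentials from environment variables you have set yourself.
    import os
    KEY = os.environ["FACE_SUBSCRIPTION_KEY"]
    ENDPOINT = os.environ["FACE_ENDPOINT"]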

  4. Run your face recognition app from the application directory with the python command.

    python quickstart-file.py
    

    Tip

    The Face API runs on a set of pre-built models that are static by nature (the model's performance will not regress or improve as the service is run). The results that the model produces might change if Microsoft updates the model's backend without migrating to an entirely new model version. To take advantage of a newer version of a model, you can retrain your PersonGroup, specifying the newer model as a parameter with the same enrollment images.
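
    A minimal sketch of that migration, assuming your enrollment images are still available (new_group_id is a hypothetical variable, and the re-enrollment loop is elided):

    # The recognition model of a person group is fixed at creation, so moving to a
    # newer model means creating a new group, re-adding the same faces, and retraining.
    new_group_id = str(uuid.uuid4())
    face_client.person_group.create(person_group_id=new_group_id, name=new_group_id, recognition_model='recognition_04')
    # ... re-add each enrollment image with person_group_person.create and add_face_from_stream ...
    face_client.person_group.train(new_group_id)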

Clean up resources

If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

To delete the PersonGroup you created in this quickstart, run the following code in your script:

# Delete the main person group.
face_client.person_group.delete(person_group_id=PERSON_GROUP_ID)
print("Deleted the person group {} from the source location.".format(PERSON_GROUP_ID))
print()

Next steps

In this quickstart, you learned how to use the Face client library for Python to do basic face identification. Next, learn about the different face detection models and how to specify the right model for your use case.

Quickstart: Face REST API

Get started with facial recognition using the Face REST API. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images.

Note

This quickstart uses cURL commands to call the REST API. You can also call the REST API using a programming language. Complex scenarios like face identification are easier to implement using a language SDK. See the GitHub samples for examples in C#, Python, Java, JavaScript, and Go.

Prerequisites

  • Azure subscription - Create one for free
  • Your Azure account must have a Cognitive Services Contributor role assigned in order for you to agree to the responsible AI terms and create a resource. To get this role assigned to your account, follow the steps in the Assign roles documentation, or contact your administrator.
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, click Go to resource.
    • You'll need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below later in the quickstart.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
  • PowerShell version 6.0+, or a similar command-line application.

Identify faces

  1. First, call the Detect API on the source face. This is the face that we'll try to identify from the larger group. Copy the following command to a text editor, insert your own key, and then copy it into a shell window and run it.

    curl -v -X POST "https://westus.api.cognitive.microsoft.com/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=false&recognitionModel=recognition_04&returnRecognitionModel=false&detectionModel=detection_03&faceIdTimeToLive=86400" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{\"url\":\"https://csdx.blob.core.windows.net/resources/Face/Images/identification1.jpg\"}"
    

    Save the returned face ID string to a temporary location. You'll use it again at the end.
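
    The detect call returns a JSON array of face objects. If you run the command in a Bash shell and have the jq utility installed, you can capture the first face ID directly; this is a convenience sketch, not a required step:

    FACE_ID=$(curl -s -X POST "https://westus.api.cognitive.microsoft.com/face/v1.0/detect?returnFaceId=true&recognitionModel=recognition_04&detectionModel=detection_03" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{\"url\":\"https://csdx.blob.core.windows.net/resources/Face/Images/identification1.jpg\"}" | jq -r '.[0].faceId')
    echo "$FACE_ID"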

  2. Next you'll need to create a LargePersonGroup. This object will store the aggregated face data of several persons. Run the following command, inserting your own key. Optionally, change the group's name and metadata in the request body. The recognitionModel value must match the model used in the Detect call above (recognition_04), or the detected face IDs can't be identified against this group.

    curl -v -X PUT "https://westus.api.cognitive.microsoft.com/face/v1.0/largepersongroups/{largePersonGroupId}" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{
        \"name\": \"large-person-group-name\",
        \"userData\": \"User-provided data attached to the large person group.\",
        \"recognitionModel\": \"recognition_03\"
    }"
    

    Save the ID of the new group (the {largePersonGroupId} value you supplied in the URL) to a temporary location.

  3. Next, you'll create Person objects that belong to the group. Run the following command, inserting your own key and the ID of the LargePersonGroup from the previous step. This command creates a Person named "Family1-Dad".

    curl -v -X POST "https://westus.api.cognitive.microsoft.com/face/v1.0/largepersongroups/{largePersonGroupId}/persons" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{
        \"name\": \"Family1-Dad\",
        \"userData\": \"User-provided data attached to the person.\"
    }"
    

    After you run this command, run it again with different input data to create more Person objects: "Family1-Mom", "Family1-Son", "Family1-Daughter", "Family2-Lady", and "Family2-Man".

    Save the IDs of each Person created; it's important to keep track of which person name has which ID.

  4. Next you'll need to detect new faces and associate them with the Person objects that exist. The following command detects a face from the image Family1-Dad1.jpg and adds it to the corresponding person. You need to specify the personId as the ID that was returned when you created the "Family1-Dad" Person object. The image name corresponds to the name of the created Person. Also enter the LargePersonGroup ID and your key in the appropriate fields.

    curl -v -X POST "https://westus.api.cognitive.microsoft.com/face/v1.0/largepersongroups/{largePersonGroupId}/persons/{personId}/persistedfaces?detectionModel=detection_03" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{\"url\":\"https://csdx.blob.core.windows.net/resources/Face/Images/Family1-Dad1.jpg\"}"
    

    Then, run the above command again with a different source image and target Person. The images available are: Family1-Dad1.jpg, Family1-Dad2.jpg, Family1-Mom1.jpg, Family1-Mom2.jpg, Family1-Son1.jpg, Family1-Son2.jpg, Family1-Daughter1.jpg, Family1-Daughter2.jpg, Family2-Lady1.jpg, Family2-Lady2.jpg, Family2-Man1.jpg, and Family2-Man2.jpg. Be sure that the Person whose ID you specify in the API call matches the name of the image file in the request body.

    At the end of this step, you should have multiple Person objects that each have one or more corresponding faces, detected directly from the provided images.

  5. Next, train the LargePersonGroup with the current face data. The training operation teaches the model how to associate facial features, sometimes aggregated from multiple source images, to each single person. Insert the LargePersonGroup ID and your key before running the command.

    curl -v -X POST "https://westus.api.cognitive.microsoft.com/face/v1.0/largepersongroups/{largePersonGroupId}/train" -H "Ocp-Apim-Subscription-Key: {subscription key}"
    
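    Training is asynchronous and usually completes within a few seconds. To confirm that it has finished, you can poll the training status. The following sketch calls the LargePersonGroup - Get Training Status operation:

    curl -v -X GET "https://westus.api.cognitive.microsoft.com/face/v1.0/largepersongroups/{largePersonGroupId}/training" -H "Ocp-Apim-Subscription-Key: {subscription key}"
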
  6. Now you're ready to call the Identify API, using the source face ID from the first step and the LargePersonGroup ID. Insert these values into the appropriate fields in the request body, and insert your key.

    curl -v -X POST "https://westus.api.cognitive.microsoft.com/face/v1.0/identify" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{
        \"largePersonGroupId\": \"INSERT_PERSONGROUP_NAME\",
        \"faceIds\": [
            \"INSERT_SOURCE_FACE_ID\"
        ],  
        \"maxNumOfCandidatesReturned\": 1,
        \"confidenceThreshold\": 0.5
    }"
    

    The response should give you a Person ID indicating the person identified with the source face. It should be the ID that corresponds to the "Family1-Dad" person, because the source face is of that person.
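
    The response body is a JSON array similar to the following sketch (the IDs and the confidence value will differ in your output):

    [
        {
            "faceId": "INSERT_SOURCE_FACE_ID",
            "candidates": [
                {
                    "personId": "<ID of the Family1-Dad person>",
                    "confidence": 0.92
                }
            ]
        }
    ]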

Clean up resources

To delete the LargePersonGroup you created in this exercise, run the LargePersonGroup - Delete call.

curl -v -X DELETE "https://westus.api.cognitive.microsoft.com/face/v1.0/largepersongroups/{largePersonGroupId}" -H "Ocp-Apim-Subscription-Key: {subscription key}"

If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to use the Face REST API to do basic facial recognition tasks. Next, learn about the different face detection models and how to specify the right model for your use case.