Quickstart: Use the Face client library

Get started with facial recognition using the Face client library for .NET. Follow these steps to install the package and try out the example code for basic tasks. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images.

Use the Face client library for .NET to:

  • Detect faces in an image
  • Find similar faces
  • Identify a face

Reference documentation | Library source code | Package (NuGet) | Samples

Prerequisites

  • Azure subscription - Create one for free
  • The Visual Studio IDE or current version of .NET Core.
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, click Go to resource.
    • You will need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below later in the quickstart.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Setting up

Create a new C# application

Using Visual Studio, create a new .NET Core application.
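
If you prefer the .NET Core CLI, you can create the project from a console window instead; the project name below is only an example:

dotnet new console --name face-quickstart
cd face-quickstart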

Install the client library

Once you've created a new project, install the client library by right-clicking on the project solution in the Solution Explorer and selecting Manage NuGet Packages. In the package manager that opens select Browse, check Include prerelease, and search for Microsoft.Azure.CognitiveServices.Vision.Face. Select version 2.6.0-preview.1, and then Install.
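
Alternatively, if you work from the command line, you can install the same package with the .NET Core CLI from your project directory:

dotnet add package Microsoft.Azure.CognitiveServices.Vision.Face --version 2.6.0-preview.1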

Tip

Want to view the whole quickstart code file at once? You can find it on GitHub, which contains the code examples in this quickstart.

From the project directory, open the program.cs file and add the following using directives:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

In the application's Program class, create variables for your resource's key and endpoint.

Important

Go to the Azure portal. If the Face resource you created in the Prerequisites section deployed successfully, click the Go to Resource button under Next Steps. You can find your key and endpoint on the resource's Keys and Endpoint page, under Resource Management.

Remember to remove the key from your code when you're done, and never post it publicly. For production, consider using a secure way of storing and accessing your credentials. See the Cognitive Services security article for more information.

// From your Face subscription in the Azure portal, get your subscription key and endpoint.
const string SUBSCRIPTION_KEY = "<your subscription key>";
const string ENDPOINT = "<your api endpoint>";

In the application's Main method, add calls for the methods used in this quickstart. You will implement these later.

// Authenticate.
IFaceClient client = Authenticate(ENDPOINT, SUBSCRIPTION_KEY);

// Detect - get features from faces.
DetectFaceExtract(client, IMAGE_BASE_URL, RECOGNITION_MODEL3).Wait();
// Find Similar - find a similar face from a list of faces.
FindSimilar(client, IMAGE_BASE_URL, RECOGNITION_MODEL3).Wait();
// Verify - compare two faces to check whether they belong to the same person.
Verify(client, IMAGE_BASE_URL, RECOGNITION_MODEL3).Wait();

// Identify - recognize a face(s) in a person group (a person group is created in this example).
IdentifyInPersonGroup(client, IMAGE_BASE_URL, RECOGNITION_MODEL3).Wait();
// LargePersonGroup - create, then get data.
LargePersonGroup(client, IMAGE_BASE_URL, RECOGNITION_MODEL3).Wait();
// Group faces - automatically group similar faces.
Group(client, IMAGE_BASE_URL, RECOGNITION_MODEL3).Wait();
// FaceList - create a face list, then get data

Object model

The following classes and interfaces handle some of the major features of the Face .NET client library:

Name Description
FaceClient This class represents your authorization to use the Face service, and you need it for all Face functionality. You instantiate it with your subscription information, and you use it to produce instances of other classes.
FaceOperations This class handles the basic detection and recognition tasks that you can do with human faces.
DetectedFace This class represents all of the data that was detected from a single face in an image. You can use it to retrieve detailed information about the face.
FaceListOperations This class manages the cloud-stored FaceList constructs, which store an assorted set of faces.
PersonGroupPersonExtensions This class manages the cloud-stored Person constructs, which store a set of faces that belong to a single person.
PersonGroupOperations This class manages the cloud-stored PersonGroup constructs, which store a set of assorted Person objects.
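
As a rough sketch (using the SUBSCRIPTION_KEY and ENDPOINT constants defined earlier), these operation classes are exposed as properties of the FaceClient instance, which is how the rest of this quickstart reaches them:

// A sketch of how the operation groups hang off the authenticated client.
IFaceClient client = new FaceClient(new ApiKeyServiceClientCredentials(SUBSCRIPTION_KEY)) { Endpoint = ENDPOINT };
// client.Face              - detection, Find Similar, Verify, and Identify calls
// client.FaceList          - FaceList management
// client.PersonGroup       - PersonGroup management
// client.PersonGroupPerson - Person management within a PersonGroup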

Code examples

The code snippets below show you how to do the following tasks with the Face client library for .NET:

  • Authenticate the client
  • Detect faces in an image
  • Find similar faces
  • Identify a face

Authenticate the client

In a new method, instantiate a client with your endpoint and key. Create an ApiKeyServiceClientCredentials object with your key, and use it with your endpoint to create a FaceClient object.

/*
 *	AUTHENTICATE
 *	Uses subscription key and endpoint to create a client.
 */
public static IFaceClient Authenticate(string endpoint, string key)
{
    return new FaceClient(new ApiKeyServiceClientCredentials(key)) { Endpoint = endpoint };
}

Declare helper fields

The following fields are needed for several of the Face operations you'll add later. At the root of your Program class, define the following URL string. This URL points to a folder of sample images.

// Used for all examples.
// URL for the images.
const string IMAGE_BASE_URL = "https://csdx.blob.core.windows.net/resources/Face/Images/";

In your Main method, define a string to point to the recognition model type you want to use. Later on, you'll specify this recognition model for face detection. See Specify a recognition model for information on the available options.

// Recognition model 3 was released in May 2020.
// It is recommended since its overall accuracy is improved
// compared with models 1 and 2.
const string RECOGNITION_MODEL3 = RecognitionModel.Recognition03;

Detect faces in an image

Get detected face objects

Create a new method to detect faces. The DetectFaceExtract method processes three of the images at the given URL and creates a list of DetectedFace objects in program memory. The list of FaceAttributeType values specifies which features to extract.

/* 
 * DETECT FACES
 * Detects features from faces and IDs them.
 */
public static async Task DetectFaceExtract(IFaceClient client, string url, string recognitionModel)
{
    Console.WriteLine("========DETECT FACES========");
    Console.WriteLine();

    // Create a list of images
    List<string> imageFileNames = new List<string>
                    {
                        "detection1.jpg",    // single female with glasses
                        // "detection2.jpg", // (optional: single man)
                        // "detection3.jpg", // (optional: single male construction worker)
                        // "detection4.jpg", // (optional: 3 people at cafe, 1 is blurred)
                        "detection5.jpg",    // family, woman child man
                        "detection6.jpg"     // elderly couple, male female
                    };

    foreach (var imageFileName in imageFileNames)
    {
        IList<DetectedFace> detectedFaces;

        // Detect faces with all attributes from image url.
        detectedFaces = await client.Face.DetectWithUrlAsync($"{url}{imageFileName}",
                returnFaceAttributes: new List<FaceAttributeType?> { FaceAttributeType.Accessories, FaceAttributeType.Age,
                FaceAttributeType.Blur, FaceAttributeType.Emotion, FaceAttributeType.Exposure, FaceAttributeType.FacialHair,
                FaceAttributeType.Gender, FaceAttributeType.Glasses, FaceAttributeType.Hair, FaceAttributeType.HeadPose,
                FaceAttributeType.Makeup, FaceAttributeType.Noise, FaceAttributeType.Occlusion, FaceAttributeType.Smile },
                // We specify detection model 1 because we are retrieving attributes.
                detectionModel: DetectionModel.Detection01,
                recognitionModel: recognitionModel);

        Console.WriteLine($"{detectedFaces.Count} face(s) detected from image `{imageFileName}`.");

Tip

You can also detect faces in a local image. See the IFaceOperations methods such as DetectWithStreamAsync.
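
For example, a minimal sketch of local-file detection (separate from the DetectFaceExtract method shown above, and assuming a detection1.jpg file saved next to your executable) might look like this:

using (FileStream imageStream = File.OpenRead("detection1.jpg"))
{
    // Detect faces in a local image stream instead of a URL.
    IList<DetectedFace> localFaces = await client.Face.DetectWithStreamAsync(
        imageStream,
        recognitionModel: RecognitionModel.Recognition03,
        detectionModel: DetectionModel.Detection01);
    Console.WriteLine($"{localFaces.Count} face(s) detected from the local image.");
}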

Display detected face data

The rest of the DetectFaceExtract method parses and prints the attribute data for each detected face. Each attribute must be specified separately in the original face detection API call (in the FaceAttributeType list). The following code processes every attribute, but you will likely only need to use one or a few.

        // Parse and print all attributes of each detected face.
        foreach (var face in detectedFaces)
        {
            Console.WriteLine($"Face attributes for {imageFileName}:");

            // Get bounding box of the faces
            Console.WriteLine($"Rectangle(Left/Top/Width/Height) : {face.FaceRectangle.Left} {face.FaceRectangle.Top} {face.FaceRectangle.Width} {face.FaceRectangle.Height}");

            // Get accessories of the faces
            List<Accessory> accessoriesList = (List<Accessory>)face.FaceAttributes.Accessories;
            int count = face.FaceAttributes.Accessories.Count;
            string accessory; string[] accessoryArray = new string[count];
            if (count == 0) { accessory = "NoAccessories"; }
            else
            {
                for (int i = 0; i < count; ++i) { accessoryArray[i] = accessoriesList[i].Type.ToString(); }
                accessory = string.Join(",", accessoryArray);
            }
            Console.WriteLine($"Accessories : {accessory}");

            // Get face other attributes
            Console.WriteLine($"Age : {face.FaceAttributes.Age}");
            Console.WriteLine($"Blur : {face.FaceAttributes.Blur.BlurLevel}");

            // Get emotion on the face
            string emotionType = string.Empty;
            double emotionValue = 0.0;
            Emotion emotion = face.FaceAttributes.Emotion;
            if (emotion.Anger > emotionValue) { emotionValue = emotion.Anger; emotionType = "Anger"; }
            if (emotion.Contempt > emotionValue) { emotionValue = emotion.Contempt; emotionType = "Contempt"; }
            if (emotion.Disgust > emotionValue) { emotionValue = emotion.Disgust; emotionType = "Disgust"; }
            if (emotion.Fear > emotionValue) { emotionValue = emotion.Fear; emotionType = "Fear"; }
            if (emotion.Happiness > emotionValue) { emotionValue = emotion.Happiness; emotionType = "Happiness"; }
            if (emotion.Neutral > emotionValue) { emotionValue = emotion.Neutral; emotionType = "Neutral"; }
            if (emotion.Sadness > emotionValue) { emotionValue = emotion.Sadness; emotionType = "Sadness"; }
            if (emotion.Surprise > emotionValue) { emotionType = "Surprise"; }
            Console.WriteLine($"Emotion : {emotionType}");

            // Get more face attributes
            Console.WriteLine($"Exposure : {face.FaceAttributes.Exposure.ExposureLevel}");
            Console.WriteLine($"FacialHair : {string.Format("{0}", face.FaceAttributes.FacialHair.Moustache + face.FaceAttributes.FacialHair.Beard + face.FaceAttributes.FacialHair.Sideburns > 0 ? "Yes" : "No")}");
            Console.WriteLine($"Gender : {face.FaceAttributes.Gender}");
            Console.WriteLine($"Glasses : {face.FaceAttributes.Glasses}");

            // Get hair color
            Hair hair = face.FaceAttributes.Hair;
            string color = null;
            if (hair.HairColor.Count == 0) { if (hair.Invisible) { color = "Invisible"; } else { color = "Bald"; } }
            HairColorType returnColor = HairColorType.Unknown;
            double maxConfidence = 0.0f;
            foreach (HairColor hairColor in hair.HairColor)
            {
                if (hairColor.Confidence <= maxConfidence) { continue; }
                maxConfidence = hairColor.Confidence; returnColor = hairColor.Color; color = returnColor.ToString();
            }
            Console.WriteLine($"Hair : {color}");

            // Get more attributes
            Console.WriteLine($"HeadPose : {string.Format("Pitch: {0}, Roll: {1}, Yaw: {2}", Math.Round(face.FaceAttributes.HeadPose.Pitch, 2), Math.Round(face.FaceAttributes.HeadPose.Roll, 2), Math.Round(face.FaceAttributes.HeadPose.Yaw, 2))}");
            Console.WriteLine($"Makeup : {string.Format("{0}", (face.FaceAttributes.Makeup.EyeMakeup || face.FaceAttributes.Makeup.LipMakeup) ? "Yes" : "No")}");
            Console.WriteLine($"Noise : {face.FaceAttributes.Noise.NoiseLevel}");
            Console.WriteLine($"Occlusion : {string.Format("EyeOccluded: {0}", face.FaceAttributes.Occlusion.EyeOccluded ? "Yes" : "No")} " +
                $" {string.Format("ForeheadOccluded: {0}", face.FaceAttributes.Occlusion.ForeheadOccluded ? "Yes" : "No")}   {string.Format("MouthOccluded: {0}", face.FaceAttributes.Occlusion.MouthOccluded ? "Yes" : "No")}");
            Console.WriteLine($"Smile : {face.FaceAttributes.Smile}");
            Console.WriteLine();
        }
    }
}

Find similar faces

The following code takes a single detected face (source) and searches a set of other faces (target) to find matches (face search by image). When it finds a match, it prints the ID of the matched face to the console.

Detect faces for comparison

First, define a second face detection method. You need to detect faces in images before you can compare them, and this detection method is optimized for comparison operations. It doesn't extract detailed face attributes as in the section above, and it uses a different detection model.

private static async Task<List<DetectedFace>> DetectFaceRecognize(IFaceClient faceClient, string url, string recognition_model)
{
    // Detect faces from the image URL, using the given recognition model.
    // We use detection model 2 because we are not retrieving attributes.
    IList<DetectedFace> detectedFaces = await faceClient.Face.DetectWithUrlAsync(url, recognitionModel: recognition_model, detectionModel: DetectionModel.Detection02);
    Console.WriteLine($"{detectedFaces.Count} face(s) detected from image `{Path.GetFileName(url)}`");
    return detectedFaces.ToList();
}

Find matches

The following method detects faces in a set of target images and in a single source image. Then, it compares them and finds all the target images that are similar to the source image.

/*
 * FIND SIMILAR
 * This example will take an image and find a similar one to it in another image.
 */
public static async Task FindSimilar(IFaceClient client, string url, string recognition_model)
{
    Console.WriteLine("========FIND SIMILAR========");
    Console.WriteLine();

    List<string> targetImageFileNames = new List<string>
                        {
                            "Family1-Dad1.jpg",
                            "Family1-Daughter1.jpg",
                            "Family1-Mom1.jpg",
                            "Family1-Son1.jpg",
                            "Family2-Lady1.jpg",
                            "Family2-Man1.jpg",
                            "Family3-Lady1.jpg",
                            "Family3-Man1.jpg"
                        };

    string sourceImageFileName = "findsimilar.jpg";
    IList<Guid?> targetFaceIds = new List<Guid?>();
    foreach (var targetImageFileName in targetImageFileNames)
    {
        // Detect faces from target image url.
        var faces = await DetectFaceRecognize(client, $"{url}{targetImageFileName}", recognition_model);
        // Add detected faceId to list of GUIDs.
        targetFaceIds.Add(faces[0].FaceId.Value);
    }

    // Detect faces from source image url.
    IList<DetectedFace> detectedFaces = await DetectFaceRecognize(client, $"{url}{sourceImageFileName}", recognition_model);
    Console.WriteLine();

    // Find similar faces in the list of IDs. Comparing only the first face in the list for testing purposes.
    IList<SimilarFace> similarResults = await client.Face.FindSimilarAsync(detectedFaces[0].FaceId.Value, null, null, targetFaceIds);

The following code prints the match details to the console:

foreach (var similarResult in similarResults)
{
    Console.WriteLine($"Faces from {sourceImageFileName} & ID:{similarResult.FaceId} are similar with confidence: {similarResult.Confidence}.");
}
Console.WriteLine();
}

Identify a face

The Identify operation takes an image of a person (or multiple people) and looks to find the identity of each face in the image (facial recognition search). It compares each detected face to a PersonGroup, a database of different Person objects whose facial features are known. In order to do the Identify operation, you first need to create and train a PersonGroup.

Create a person group

The following code creates a PersonGroup with six different Person objects. It associates each Person with a set of example images, and then it trains to recognize each person by their facial characteristics. Person and PersonGroup objects are used in the Verify, Identify, and Group operations.

Declare a string variable at the root of your class to represent the ID of the PersonGroup you'll create.

static string personGroupId = Guid.NewGuid().ToString();

In a new method, add the following code. This method will carry out the Identify operation. The first block of code associates the names of persons with their example images.

public static async Task IdentifyInPersonGroup(IFaceClient client, string url, string recognitionModel)
{
    Console.WriteLine("========IDENTIFY FACES========");
    Console.WriteLine();

    // Create a dictionary for all your images, grouping similar ones under the same key.
    Dictionary<string, string[]> personDictionary =
        new Dictionary<string, string[]>
            { { "Family1-Dad", new[] { "Family1-Dad1.jpg", "Family1-Dad2.jpg" } },
              { "Family1-Mom", new[] { "Family1-Mom1.jpg", "Family1-Mom2.jpg" } },
              { "Family1-Son", new[] { "Family1-Son1.jpg", "Family1-Son2.jpg" } },
              { "Family1-Daughter", new[] { "Family1-Daughter1.jpg", "Family1-Daughter2.jpg" } },
              { "Family2-Lady", new[] { "Family2-Lady1.jpg", "Family2-Lady2.jpg" } },
              { "Family2-Man", new[] { "Family2-Man1.jpg", "Family2-Man2.jpg" } }
            };
    // A group photo that includes some of the persons you seek to identify from your dictionary.
    string sourceImageFileName = "identification1.jpg";

Notice that this code defines a variable sourceImageFileName. This variable corresponds to the source image—the image that contains people to identify.

Next, add the following code to create a Person object for each person in the Dictionary and add the face data from the appropriate images. Each Person object is associated with the same PersonGroup through its unique ID string. Remember to pass the variables client, url, and RECOGNITION_MODEL3 into this method.

// Create a person group. 
Console.WriteLine($"Create a person group ({personGroupId}).");
await client.PersonGroup.CreateAsync(personGroupId, personGroupId, recognitionModel: recognitionModel);
// The similar faces will be grouped into a single person group person.
foreach (var groupedFace in personDictionary.Keys)
{
    // Limit TPS
    await Task.Delay(250);
    Person person = await client.PersonGroupPerson.CreateAsync(personGroupId: personGroupId, name: groupedFace);
    Console.WriteLine($"Create a person group person '{groupedFace}'.");

    // Add face to the person group person.
    foreach (var similarImage in personDictionary[groupedFace])
    {
        Console.WriteLine($"Add face to the person group person({groupedFace}) from image `{similarImage}`");
        PersistedFace face = await client.PersonGroupPerson.AddFaceFromUrlAsync(personGroupId, person.PersonId,
            $"{url}{similarImage}", similarImage);
    }
}

Tip

You can also create a PersonGroup from local images. See the IPersonGroupPerson methods such as AddFaceFromStreamAsync.
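
For example, a minimal sketch that assumes the sample image Family1-Dad1.jpg has been saved locally, and that personGroupId and person are in scope as in the loop above, might look like this:

using (FileStream imageStream = File.OpenRead("Family1-Dad1.jpg"))
{
    // Add a face to the person group person from a local image stream instead of a URL.
    PersistedFace persistedFace = await client.PersonGroupPerson.AddFaceFromStreamAsync(
        personGroupId, person.PersonId, imageStream);
}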

Train the PersonGroup

Once you've extracted face data from your images and sorted it into different Person objects, you must train the PersonGroup to identify the visual features associated with each of its Person objects. The following code calls the asynchronous train method and polls the results, printing the status to the console.

// Start to train the person group.
Console.WriteLine();
Console.WriteLine($"Train person group {personGroupId}.");
await client.PersonGroup.TrainAsync(personGroupId);

// Wait until the training is completed.
while (true)
{
    await Task.Delay(1000);
    var trainingStatus = await client.PersonGroup.GetTrainingStatusAsync(personGroupId);
    Console.WriteLine($"Training status: {trainingStatus.Status}.");
    if (trainingStatus.Status == TrainingStatusType.Succeeded) { break; }
}
Console.WriteLine();

Tip

The Face API runs on a set of pre-built models that are static by nature (the model's performance will not regress or improve as the service is run). The results that the model produces might change if Microsoft updates the model's backend without migrating to an entirely new model version. To take advantage of a newer version of a model, you can retrain your PersonGroup, specifying the newer model as a parameter with the same enrollment images.

This PersonGroup and its associated Person objects are now ready to be used in the Verify, Identify, or Group operations.

Identify faces

The following code takes the source image and creates a list of all the faces detected in the image. These are the faces that will be identified against the PersonGroup.

List<Guid?> sourceFaceIds = new List<Guid?>();
// Detect faces from source image url.
List<DetectedFace> detectedFaces = await DetectFaceRecognize(client, $"{url}{sourceImageFileName}", recognitionModel);

// Add detected faceId to sourceFaceIds.
foreach (var detectedFace in detectedFaces) { sourceFaceIds.Add(detectedFace.FaceId.Value); }

The next code snippet calls the IdentifyAsync operation and prints the results to the console. Here, the service attempts to match each face from the source image to a Person in the given PersonGroup. This closes out your Identify method.

    // Identify the faces in a person group. 
    var identifyResults = await client.Face.IdentifyAsync(sourceFaceIds, personGroupId);

    foreach (var identifyResult in identifyResults)
    {
        Person person = await client.PersonGroupPerson.GetAsync(personGroupId, identifyResult.Candidates[0].PersonId);
        Console.WriteLine($"Person '{person.Name}' is identified for face in: {sourceImageFileName} - {identifyResult.FaceId}," +
            $" confidence: {identifyResult.Candidates[0].Confidence}.");
    }
    Console.WriteLine();
}

Run the application

Run the application by clicking the Debug button at the top of the IDE window.
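
If you're using the .NET Core CLI rather than Visual Studio, you can build and run from the project directory instead:

dotnet run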

Clean up resources

If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

If you created a PersonGroup in this quickstart and you want to delete it, run the following code in your program:

// At end, delete person groups in both regions (since testing only)
Console.WriteLine("========DELETE PERSON GROUP========");
Console.WriteLine();
DeletePersonGroup(client, personGroupId).Wait();

Define the deletion method with the following code:

/*
 * DELETE PERSON GROUP
 * After this entire example is executed, delete the person group in your Azure account,
 * otherwise you cannot recreate one with the same name (if running example repeatedly).
 */
public static async Task DeletePersonGroup(IFaceClient client, String personGroupId)
{
    await client.PersonGroup.DeleteAsync(personGroupId);
    Console.WriteLine($"Deleted the person group {personGroupId}.");
}

Next steps

In this quickstart, you learned how to use the Face client library for .NET to do basic facial recognition tasks. Next, explore the reference documentation to learn more about the library.

Get started with facial recognition using the Face client library for Python. Follow these steps to install the package and try out the example code for basic tasks. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images.

Use the Face client library for Python to:

  • Detect faces in an image
  • Find similar faces
  • Create and train a person group
  • Identify a face
  • Verify faces

Reference documentation | Library source code | Package (PyPI) | Samples

Prerequisites

  • Azure subscription - Create one for free
  • Python 3.x
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, click Go to resource.
    • You will need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below later in the quickstart.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Setting up

Install the client library

After installing Python, you can install the client library with:

pip install --upgrade azure-cognitiveservices-vision-face

Create a new Python application

Create a new Python script—quickstart-file.py, for example. Then open it in your preferred editor or IDE and import the following libraries.

import asyncio
import io
import glob
import os
import sys
import time
import uuid
import requests
from urllib.parse import urlparse
from io import BytesIO
# To install this module, run:
# python -m pip install Pillow
from PIL import Image, ImageDraw
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials
from azure.cognitiveservices.vision.face.models import TrainingStatusType, Person

Tip

Want to view the whole quickstart code file at once? You can find it on GitHub, which contains the code examples in this quickstart.

Then, create variables for your resource's Azure endpoint and key.

# This key will serve all examples in this document.
KEY = "<your subscription key>"

# This endpoint will be used in all examples in this quickstart.
ENDPOINT = "<your api endpoint>"

Important

Go to the Azure portal. If the Face resource you created in the Prerequisites section deployed successfully, click the Go to Resource button under Next Steps. You can find your key and endpoint on the resource's Keys and Endpoint page, under Resource Management.

Remember to remove the key from your code when you're done, and never post it publicly. For production, consider using a secure way of storing and accessing your credentials, such as Azure Key Vault.

Object model

The following classes and interfaces handle some of the major features of the Face Python client library.

Name Description
FaceClient This class represents your authorization to use the Face service, and you need it for all Face functionality. You instantiate it with your subscription information, and you use it to produce instances of other classes.
FaceOperations This class handles the basic detection and recognition tasks that you can do with human faces.
DetectedFace This class represents all of the data that was detected from a single face in an image. You can use it to retrieve detailed information about the face.
FaceListOperations This class manages the cloud-stored FaceList constructs, which store an assorted set of faces.
PersonGroupPersonOperations This class manages the cloud-stored Person constructs, which store a set of faces that belong to a single person.
PersonGroupOperations This class manages the cloud-stored PersonGroup constructs, which store a set of assorted Person objects.
SnapshotOperations This class manages the Snapshot functionality; you can use it to temporarily save all of your cloud-based face data and migrate that data to a new Azure subscription.
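
As a rough sketch, these operation classes are exposed as attributes of the FaceClient object you'll create next, so the rest of this quickstart reaches them through face_client:

# A sketch of how the operation groups hang off the authenticated client.
face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))
# face_client.face                 - detection, find similar, verify, and identify calls
# face_client.face_list            - FaceList management
# face_client.person_group         - PersonGroup management
# face_client.person_group_person  - Person management within a PersonGroup
# face_client.snapshot             - Snapshot operations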

Code examples

These code snippets show you how to do the following tasks with the Face client library for Python:

  • Authenticate the client
  • Detect faces in an image
  • Display and frame faces
  • Find similar faces
  • Create and train a person group
  • Identify a face
  • Verify faces

Authenticate the client

Instantiate a client with your endpoint and key. Create a CognitiveServicesCredentials object with your key, and use it with your endpoint to create a FaceClient object.

# Create an authenticated FaceClient.
face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))

Detect faces in an image

The following code detects a face in a remote image. It prints the detected face's ID to the console and also stores it in program memory. Then, it detects the faces in an image with multiple people and prints their IDs to the console as well. By changing the parameters in the detect_with_url method, you can return different information with each DetectedFace object.

# Detect a face in an image that contains a single face
single_face_image_url = 'https://www.biography.com/.image/t_share/MTQ1MzAyNzYzOTgxNTE0NTEz/john-f-kennedy---mini-biography.jpg'
single_image_name = os.path.basename(single_face_image_url)
# We use detection model 2 because we are not retrieving attributes.
detected_faces = face_client.face.detect_with_url(url=single_face_image_url, detection_model='detection_02')
if not detected_faces:
    raise Exception('No face detected from image {}'.format(single_image_name))

# Display the detected face ID in the first single-face image.
# Face IDs are used for comparison to faces (their IDs) detected in other images.
print('Detected face ID from', single_image_name, ':')
for face in detected_faces: print (face.face_id)
print()

# Save this ID for use in Find Similar
first_image_face_ID = detected_faces[0].face_id

Tip

You can also detect faces in a local image. See the FaceOperations methods such as detect_with_stream.

Display and frame faces

The following code outputs the given image to the display and draws rectangles around the faces, using the DetectedFace.faceRectangle property.

# Detect a face in an image that contains a single face
single_face_image_url = 'https://raw.githubusercontent.com/Microsoft/Cognitive-Face-Windows/master/Data/detection1.jpg'
single_image_name = os.path.basename(single_face_image_url)
# We use detection model 2 because we are not retrieving attributes.
detected_faces = face_client.face.detect_with_url(url=single_face_image_url, detection_model='detection_02')
if not detected_faces:
    raise Exception('No face detected from image {}'.format(single_image_name))

# Convert width height to a point in a rectangle
def getRectangle(faceDictionary):
    rect = faceDictionary.face_rectangle
    left = rect.left
    top = rect.top
    right = left + rect.width
    bottom = top + rect.height
    
    return ((left, top), (right, bottom))


# Download the image from the url
response = requests.get(single_face_image_url)
img = Image.open(BytesIO(response.content))

# For each face returned use the face rectangle and draw a red box.
print('Drawing rectangle around face... see popup for results.')
draw = ImageDraw.Draw(img)
for face in detected_faces:
    draw.rectangle(getRectangle(face), outline='red')

# Display the image in the users default image browser.
img.show()

A young woman with a red rectangle drawn around the face

Find similar faces

The following code takes a single detected face (source) and searches a set of other faces (target) to find matches (face search by image). When it finds a match, it prints the ID of the matched face to the console.

Find matches

First, run the code in the above section (Detect faces in an image) to save a reference to a single face. Then run the following code to get references to several faces in a group image.

# Detect the faces in an image that contains multiple faces
# Each detected face gets assigned a new ID
multi_face_image_url = "http://www.historyplace.com/kennedy/president-family-portrait-closeup.jpg"
multi_image_name = os.path.basename(multi_face_image_url)
# We use detection model 2 because we are not retrieving attributes.
detected_faces2 = face_client.face.detect_with_url(url=multi_face_image_url, detection_model='detection_02')

Then add the following code block to find instances of the first face in the group. See the find_similar method to learn how to modify this behavior.

# Search through faces detected in group image for the single face from first image.
# First, create a list of the face IDs found in the second image.
second_image_face_IDs = list(map(lambda x: x.face_id, detected_faces2))
# Next, find similar face IDs like the one detected in the first image.
similar_faces = face_client.face.find_similar(face_id=first_image_face_ID, face_ids=second_image_face_IDs)
if not similar_faces:
    print('No similar faces found in', multi_image_name, '.')

Use the following code to print the match details to the console.

# Print the details of the similar faces detected
print('Similar faces found in', multi_image_name + ':')
for face in similar_faces:
    first_image_face_ID = face.face_id
    # The similar face IDs of the single face image and the group image do not need to match, 
    # they are only used for identification purposes in each image.
    # The similar faces are matched using the Cognitive Services algorithm in find_similar().
    face_info = next(x for x in detected_faces2 if x.face_id == first_image_face_ID)
    if face_info:
        print('  Face ID: ', first_image_face_ID)
        print('  Face rectangle:')
        print('    Left: ', str(face_info.face_rectangle.left))
        print('    Top: ', str(face_info.face_rectangle.top))
        print('    Width: ', str(face_info.face_rectangle.width))
        print('    Height: ', str(face_info.face_rectangle.height))

Create and train a person group

The following code creates a PersonGroup with three different Person objects. It associates each Person with a set of example images, and then it trains to be able to recognize each person.

Create PersonGroup

To step through this scenario, you need to save the following images to the root directory of your project: https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/Face/images.

This group of images contains three sets of face images corresponding to three different people. The code will define three Person objects and associate them with image files that start with woman, man, and child.

Once you've set up your images, define a label at the top of your script for the PersonGroup object you'll create.

# Used in the Person Group Operations and Delete Person Group examples.
# You can call list_person_groups to print a list of preexisting PersonGroups.
# PERSON_GROUP_ID should be lowercase and alphanumeric (dashes and underscores are OK). For example, 'mygroupname'.
PERSON_GROUP_ID = str(uuid.uuid4()) # assign a random ID (or name it anything)

# Used for the Delete Person Group example.
TARGET_PERSON_GROUP_ID = str(uuid.uuid4()) # assign a random ID (or name it anything)

Then add the following code to the bottom of your script. This code creates a PersonGroup and three Person objects.

'''
Create the PersonGroup
'''
# Create empty Person Group. Person Group ID must be lower case, alphanumeric, and/or with '-', '_'.
print('Person group:', PERSON_GROUP_ID)
face_client.person_group.create(person_group_id=PERSON_GROUP_ID, name=PERSON_GROUP_ID)

# Define woman friend
woman = face_client.person_group_person.create(PERSON_GROUP_ID, "Woman")
# Define man friend
man = face_client.person_group_person.create(PERSON_GROUP_ID, "Man")
# Define child friend
child = face_client.person_group_person.create(PERSON_GROUP_ID, "Child")

Assign faces to Persons

The following code sorts your images by their prefix, detects faces, and assigns the faces to each Person object.

'''
Detect faces and register to correct person
'''
# Find all jpeg images of friends in working directory
woman_images = [file for file in glob.glob('*.jpg') if file.startswith("w")]
man_images = [file for file in glob.glob('*.jpg') if file.startswith("m")]
child_images = [file for file in glob.glob('*.jpg') if file.startswith("ch")]

# Add to a woman person
for image in woman_images:
    w = open(image, 'r+b')
    face_client.person_group_person.add_face_from_stream(PERSON_GROUP_ID, woman.person_id, w)

# Add to a man person
for image in man_images:
    m = open(image, 'r+b')
    face_client.person_group_person.add_face_from_stream(PERSON_GROUP_ID, man.person_id, m)

# Add to a child person
for image in child_images:
    ch = open(image, 'r+b')
    face_client.person_group_person.add_face_from_stream(PERSON_GROUP_ID, child.person_id, ch)

Tip

You can also create a PersonGroup from remote images referenced by URL. See the PersonGroupPersonOperations methods such as add_face_from_url.
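
For example, a minimal sketch that registers a remote image for the Woman person might look like the following; the URL here is only a placeholder for an accessible image of that person:

# Add a face to the person from a remote image URL instead of a local stream.
# The URL below is a placeholder, not a real sample image.
face_client.person_group_person.add_face_from_url(
    PERSON_GROUP_ID, woman.person_id, 'https://example.com/woman-photo.jpg')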

Train PersonGroup

Once you've assigned faces, you must train the PersonGroup so that it can identify the visual features associated with each of its Person objects. The following code calls the asynchronous train method and polls the result, printing the status to the console.

'''
Train PersonGroup
'''
print()
print('Training the person group...')
# Train the person group
face_client.person_group.train(PERSON_GROUP_ID)

while (True):
    training_status = face_client.person_group.get_training_status(PERSON_GROUP_ID)
    print("Training status: {}.".format(training_status.status))
    print()
    if (training_status.status is TrainingStatusType.succeeded):
        break
    elif (training_status.status is TrainingStatusType.failed):
        sys.exit('Training the person group has failed.')
    time.sleep(5)

Tip

The Face API runs on a set of pre-built models that are static by nature (the model's performance will not regress or improve as the service is run). The results that the model produces might change if Microsoft updates the model's backend without migrating to an entirely new model version. To take advantage of a newer version of a model, you can retrain your PersonGroup, specifying the newer model as a parameter with the same enrollment images.

Identify a face

The Identify operation takes an image of a person (or multiple people) and looks to find the identity of each face in the image (facial recognition search). It compares each detected face to a PersonGroup, a database of different Person objects whose facial features are known.

Important

In order to run this example, you must first run the code in Create and train a person group.

Get a test image

The following code looks in the root of your project for an image test-image-person-group.jpg and detects the faces in the image. You can find this image with the images used for PersonGroup management: https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/Face/images.

'''
Identify a face against a defined PersonGroup
'''
# Group image for testing against
test_image_array = glob.glob('test-image-person-group.jpg')
image = open(test_image_array[0], 'r+b')

print('Pausing for 60 seconds to avoid triggering rate limit on free account...')
time.sleep (60)

# Detect faces
face_ids = []
# We use detection model 2 because we are not retrieving attributes.
faces = face_client.face.detect_with_stream(image, detection_model='detection_02')
for face in faces:
    face_ids.append(face.face_id)

Identify faces

The identify method takes an array of detected faces and compares them to a PersonGroup. If it can match a detected face to a Person, it saves the result. This code prints detailed match results to the console.

# Identify faces
results = face_client.face.identify(face_ids, PERSON_GROUP_ID)
print('Identifying faces in {}'.format(os.path.basename(image.name)))
if not results:
    print('No person identified in the person group for faces from {}.'.format(os.path.basename(image.name)))
for person in results:
    if len(person.candidates) > 0:
        print('Person for face ID {} is identified in {} with a confidence of {}.'.format(person.face_id, os.path.basename(image.name), person.candidates[0].confidence)) # Get topmost confidence score
    else:
        print('No person identified for face ID {} in {}.'.format(person.face_id, os.path.basename(image.name)))

Verify faces

The Verify operation takes a face ID and either another face ID or a Person object and determines whether they belong to the same person.

The following code detects faces in two source images and then verifies them against a face detected from a target image.

Get test images

The following code blocks declare variables that will point to the source and target images for the verification operation.

# Base url for the Verify and Facelist/Large Facelist operations
IMAGE_BASE_URL = 'https://csdx.blob.core.windows.net/resources/Face/Images/'
# Create a list to hold the target photos of the same person
target_image_file_names = ['Family1-Dad1.jpg', 'Family1-Dad2.jpg']
# The source photos contain this person
source_image_file_name1 = 'Family1-Dad3.jpg'
source_image_file_name2 = 'Family1-Son1.jpg'

Detect faces for verification

The following code detects faces in the source and target images and saves them to variables.

# Detect face(s) from source image 1, returns a list[DetectedFaces]
# We use detection model 2 because we are not retrieving attributes.
detected_faces1 = face_client.face.detect_with_url(IMAGE_BASE_URL + source_image_file_name1, detection_model='detection_02')
# Add the returned face's face ID
source_image1_id = detected_faces1[0].face_id
print('{} face(s) detected from image {}.'.format(len(detected_faces1), source_image_file_name1))

# Detect face(s) from source image 2, returns a list[DetectedFaces]
detected_faces2 = face_client.face.detect_with_url(IMAGE_BASE_URL + source_image_file_name2, detection_model='detection_02')
# Add the returned face's face ID
source_image2_id = detected_faces2[0].face_id
print('{} face(s) detected from image {}.'.format(len(detected_faces2), source_image_file_name2))

# List for the target face IDs (uuids)
detected_faces_ids = []
# Detect faces from target image url list, returns a list[DetectedFaces]
for image_file_name in target_image_file_names:
    # We use detection model 2 because we are not retrieving attributes.
    detected_faces = face_client.face.detect_with_url(IMAGE_BASE_URL + image_file_name, detection_model='detection_02')
    # Add the returned face's face ID
    detected_faces_ids.append(detected_faces[0].face_id)
    print('{} face(s) detected from image {}.'.format(len(detected_faces), image_file_name))

Get verification results

The following code compares each of the source images to the target image and prints a message indicating whether they belong to the same person.

# Verification example for faces of the same person. The higher the confidence, the more identical the faces in the images are.
# Since target faces are the same person, in this example, we can use the 1st ID in the detected_faces_ids list to compare.
verify_result_same = face_client.face.verify_face_to_face(source_image1_id, detected_faces_ids[0])
print('Faces from {} & {} are of the same person, with confidence: {}'
    .format(source_image_file_name1, target_image_file_names[0], verify_result_same.confidence)
    if verify_result_same.is_identical
    else 'Faces from {} & {} are of a different person, with confidence: {}'
        .format(source_image_file_name1, target_image_file_names[0], verify_result_same.confidence))

# Verification example for faces of different persons.
# Since target faces are same person, in this example, we can use the 1st ID in the detected_faces_ids list to compare.
verify_result_diff = face_client.face.verify_face_to_face(source_image2_id, detected_faces_ids[0])
print('Faces from {} & {} are of the same person, with confidence: {}'
    .format(source_image_file_name2, target_image_file_names[0], verify_result_diff.confidence)
    if verify_result_diff.is_identical
    else 'Faces from {} & {} are of a different person, with confidence: {}'
        .format(source_image_file_name2, target_image_file_names[0], verify_result_diff.confidence))

Run the application

Run your face recognition app from the application directory with the python command.

python quickstart-file.py

Clean up resources

If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

If you created a PersonGroup in this quickstart and you want to delete it, run the following code in your script:

# Delete the main person group.
face_client.person_group.delete(person_group_id=PERSON_GROUP_ID)
print("Deleted the person group {} from the source location.".format(PERSON_GROUP_ID))
print()

Next steps

In this quickstart, you learned how to use the Face client library for Python to do basic facial recognition tasks. Next, explore the reference documentation to learn more about the library.

Get started with facial recognition using the Face client library for Go. Follow these steps to install the package and try out the example code for basic tasks. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images.

Use the Face service client library for Go to:

  • Detect faces in an image
  • Find similar faces
  • Create and train a person group

Reference documentation | Library source code | SDK download

Prerequisites

  • The latest version of Go
  • Azure subscription - Create one for free
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, click Go to resource.
    • You will need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below later in the quickstart.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
  • After you get a key and endpoint, create environment variables for the key and endpoint, named FACE_SUBSCRIPTION_KEY and FACE_ENDPOINT, respectively.

Setting up

Create a Go project directory

In a console window (cmd, PowerShell, Terminal, Bash), create a new workspace for your Go project, named my-app, and navigate to it.

mkdir -p my-app/{src,bin,pkg}
cd my-app

Your workspace will contain three folders:

  • src - This directory will contain source code and packages. Any packages installed with the go get command will be in this folder.
  • pkg - This directory will contain the compiled Go package objects. These files all have a .a extension.
  • bin - This directory will contain the binary executable files that are created when you run go install.

Tip

To learn more about the structure of a Go workspace, see the Go language documentation. This guide includes information for setting $GOPATH and $GOROOT.
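
For example, with a bash shell you could run the following from inside the my-app directory to point GOPATH at this workspace (adjust the syntax for your own shell and layout):

export GOPATH=$(pwd)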

Install the client library for Go

Next, install the client library for Go:

go get -u github.com/Azure/azure-sdk-for-go/services/cognitiveservices/v1.0/face

or if you use dep, within your repo run:

dep ensure -add github.com/Azure/azure-sdk-for-go/services/cognitiveservices/v1.0/face

Create a Go application

Next, create a file in the src directory named sample-app.go:

cd src
touch sample-app.go

Open sample-app.go in your preferred IDE or text editor. Then add the package name and import the following libraries:

package main

import (
    "encoding/json"
    "container/list"
    "context"
    "fmt"
    "github.com/Azure/azure-sdk-for-go/services/cognitiveservices/v1.0/face"
    "github.com/Azure/go-autorest/autorest"
    "github.com/satori/go.uuid"
    "io"
    "io/ioutil"
    "log"
    "os"
    "path"
    "strconv"
    "strings"
    "time"
)

Next, you'll begin adding code to carry out different Face service operations.

Object model

The following classes and interfaces handle some of the major features of the Face service Go client library.

Name Description
BaseClient This class represents your authorization to use the Face service, and you need it for all Face functionality. You instantiate it with your subscription information, and you use it to produce instances of other classes.
Client This class handles the basic detection and recognition tasks that you can do with human faces.
DetectedFace This class represents all of the data that was detected from a single face in an image. You can use it to retrieve detailed information about the face.
ListClient This class manages the cloud-stored FaceList constructs, which store an assorted set of faces.
PersonGroupPersonClient This class manages the cloud-stored Person constructs, which store a set of faces that belong to a single person.
PersonGroupClient This class manages the cloud-stored PersonGroup constructs, which store a set of assorted Person objects.
SnapshotClient This class manages the Snapshot functionality. You can use it to temporarily save all of your cloud-based Face data and migrate that data to a new Azure subscription.
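
As a rough sketch, each of these clients is constructed from your endpoint and then given an authorizer; this is the pattern the rest of this quickstart follows:

// A sketch of the client constructors used later in this quickstart.
// Each client then needs an authorizer, for example autorest.NewCognitiveServicesAuthorizer(subscriptionKey).
client := face.NewClient(endpoint)
personGroupClient := face.NewPersonGroupClient(endpoint)
personGroupPersonClient := face.NewPersonGroupPersonClient(endpoint)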

Code examples

These code samples show you how to complete basic tasks using the Face service client library for Go:

  • Authenticate the client
  • Detect faces in an image
  • Find similar faces
  • Create and train a person group

Authenticate the client

Note

This quickstart assumes you've created environment variables for your Face key and endpoint, named FACE_SUBSCRIPTION_KEY and FACE_ENDPOINT respectively.

Create a main function and add the following code to it to instantiate a client with your endpoint and key. You create a CognitiveServicesAuthorizer object with your key, and use it with your endpoint to create a Client object. This code also instantiates a context object, which is needed for the creation of client objects. It also defines a remote location where some of the sample images in this quickstart are found.

func main() {

    // A global context for use in all samples
    faceContext := context.Background()

    // Base url for the Verify and Large Face List examples
    const imageBaseURL = "https://csdx.blob.core.windows.net/resources/Face/Images/"

    /*
    Authenticate
    */
    // Add FACE_SUBSCRIPTION_KEY and FACE_ENDPOINT to your environment variables.
    subscriptionKey := os.Getenv("FACE_SUBSCRIPTION_KEY")
    endpoint := os.Getenv("FACE_ENDPOINT")

    // Client used for Detect Faces, Find Similar, and Verify examples.
    client := face.NewClient(endpoint)
    client.Authorizer = autorest.NewCognitiveServicesAuthorizer(subscriptionKey)
    /*
    END - Authenticate
    */

Detect faces in an image

Add the following code in your main method. This code defines a remote sample image and specifies which face features to extract from the image. It also specifies which AI model to use to extract data from the detected face(s). See Specify a recognition model for information on these options. Finally, the DetectWithURL method does the face detection operation on the image and saves the results in program memory.

// Detect a face in an image that contains a single face
singleFaceImageURL := "https://www.biography.com/.image/t_share/MTQ1MzAyNzYzOTgxNTE0NTEz/john-f-kennedy---mini-biography.jpg" 
singleImageURL := face.ImageURL { URL: &singleFaceImageURL } 
singleImageName := path.Base(singleFaceImageURL)
// Array types chosen for the attributes of Face
attributes := []face.AttributeType {"age", "emotion", "gender"}
returnFaceID := true
returnRecognitionModel := false
returnFaceLandmarks := false

// API call to detect faces in single-faced image, using recognition model 3
// We specify detection model 1 because we are retrieving attributes.
detectSingleFaces, dErr := client.DetectWithURL(faceContext, singleImageURL, &returnFaceID, &returnFaceLandmarks, attributes, face.Recognition03, &returnRecognitionModel, face.Detection01)
if dErr != nil { log.Fatal(dErr) }

// Dereference *[]DetectedFace, in order to loop through it.
dFaces := *detectSingleFaces.Value

Tip

You can also detect faces in a local image. See the Client methods such as DetectWithStream.
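
For example, a minimal sketch that assumes detection1.jpg is in the working directory and reuses the variables defined above might look like this:

// Open a local image file; *os.File satisfies io.ReadCloser.
localImage, oErr := os.Open("detection1.jpg")
if oErr != nil { log.Fatal(oErr) }
// Detect faces from the local stream instead of a URL (no attributes requested here).
detectLocalFaces, dlErr := client.DetectWithStream(faceContext, localImage, &returnFaceID, &returnFaceLandmarks, nil, face.Recognition03, &returnRecognitionModel, face.Detection02)
if dlErr != nil { log.Fatal(dlErr) }
fmt.Println(len(*detectLocalFaces.Value), "face(s) detected from the local image.")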

Display detected face data

The next block of code takes the first element in the array of DetectedFace objects and prints its attributes to the console. If you used an image with multiple faces, you should iterate through the array instead.

fmt.Println("Detected face in (" + singleImageName + ") with ID(s): ")
fmt.Println(dFaces[0].FaceID)
fmt.Println()
// Find/display the age and gender attributes
for _, dFace := range dFaces { 
    fmt.Println("Face attributes:")
    fmt.Printf("  Age: %.0f", *dFace.FaceAttributes.Age) 
    fmt.Println("\n  Gender: " + dFace.FaceAttributes.Gender) 
} 
// Get/display the emotion attribute
emotionStruct := *dFaces[0].FaceAttributes.Emotion
// Convert struct to a map
var emotionMap map[string]float64
result, _ := json.Marshal(emotionStruct)
json.Unmarshal(result, &emotionMap)
// Find the emotion with the highest score (confidence level). Range is 0.0 - 1.0.
var highest float64 
emotion := ""
dScore := -1.0
for name, value := range emotionMap{
    if (value > highest) {
        emotion, dScore = name, value
        highest = value
    }
}
fmt.Println("  Emotion: " + emotion + " (score: " + strconv.FormatFloat(dScore, 'f', 3, 64) + ")")

Find similar faces

The following code takes a single detected face (source) and searches a set of other faces (target) to find matches (face search by image). When it finds a match, it prints the ID of the matched face to the console.

Detect faces for comparison

First, save a reference to the face you detected in the Detect faces in an image section. This face will be the source.

// Select an ID in single-faced image for comparison to faces detected in group image. Used in Find Similar.
firstImageFaceID := dFaces[0].FaceID

Then enter the following code to detect a set of faces in a different image. These faces will be the target.

// Detect the faces in an image that contains multiple faces
groupImageURL := "http://www.historyplace.com/kennedy/president-family-portrait-closeup.jpg"
groupImageName := path.Base(groupImageURL)
groupImage := face.ImageURL { URL: &groupImageURL } 

// API call to detect faces in group image, using recognition model 3. This returns a ListDetectedFace struct.
// We specify detection model 2 because we are not retrieving attributes.
detectedGroupFaces, dgErr := client.DetectWithURL(faceContext, groupImage, &returnFaceID, &returnFaceLandmarks, nil, face.Recognition03, &returnRecognitionModel, face.Detection02)
if dgErr != nil { log.Fatal(dgErr) }
fmt.Println()

// Detect faces in the group image.
// Dereference *[]DetectedFace, in order to loop through it.
dFaces2 := *detectedGroupFaces.Value
// Make slice list of UUIDs
faceIDs := make([]uuid.UUID, len(dFaces2))
fmt.Print("Detected faces in (" + groupImageName + ") with ID(s):\n")
for i, face := range dFaces2 {
    faceIDs[i] = *face.FaceID // Dereference DetectedFace.FaceID
    fmt.Println(*face.FaceID)
}

Find matches

The following code uses the FindSimilar method to find all of the target faces that match the source face.

// Add single-faced image ID to struct
findSimilarBody := face.FindSimilarRequest { FaceID: firstImageFaceID, FaceIds: &faceIDs }
// Get the list of similar faces found in the group image of previously detected faces
listSimilarFaces, sErr := client.FindSimilar(faceContext, findSimilarBody)
if sErr != nil { log.Fatal(sErr) }

// The *[]SimilarFace 
simFaces := *listSimilarFaces.Value

The following code prints the match details to the console.

// Print the details of the similar faces detected 
fmt.Print("Similar faces found in (" + groupImageName + ") with ID(s):\n")
var sScore float64
for _, face := range simFaces {
    fmt.Println(face.FaceID)
    // Confidence of the found face with range 0.0 to 1.0.
    sScore = *face.Confidence
    fmt.Println("The similarity confidence: ", strconv.FormatFloat(sScore, 'f', 3, 64))
}

Create and train a person group

To step through this scenario, you need to save the following images to the root directory of your project: https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/Face/images.

This group of images contains three sets of single-face images that correspond to three different people. The code will define three PersonGroup Person objects and associate them with image files that start with woman, man, and child.

Create PersonGroup

Once you've downloaded your images, add the following code to the bottom of your main method. This code authenticates a PersonGroupClient object and then uses it to define a new PersonGroup.

// Get working directory
root, rootErr := os.Getwd()
if rootErr != nil { log.Fatal(rootErr) }

// Full path to images folder
imagePathRoot := path.Join(root+"\\images\\")

// Authenticate - Need a special person group client for your person group
personGroupClient := face.NewPersonGroupClient(endpoint)
personGroupClient.Authorizer = autorest.NewCognitiveServicesAuthorizer(subscriptionKey)

// Create an empty person group. The person group ID must be lowercase, alphanumeric, and/or contain '-' or '_'.
personGroupID := "unique-person-group"
fmt.Println("Person group ID: " + personGroupID)
metadata := face.MetaDataContract { Name: &personGroupID }

// Create the person group and check the error returned by the service
_, pgErr := personGroupClient.Create(faceContext, personGroupID, metadata)
if pgErr != nil { log.Fatal(pgErr) }

Create PersonGroup Persons

The next block of code authenticates a PersonGroupPersonClient and uses it to define three new PersonGroup Person objects. These objects each represent a single person in the set of images.

// Authenticate - Need a special person group person client for your person group person
personGroupPersonClient := face.NewPersonGroupPersonClient(endpoint)
personGroupPersonClient.Authorizer = autorest.NewCognitiveServicesAuthorizer(subscriptionKey)

// Create each person group person for each group of images (woman, man, child)
// Define woman friend
w := "Woman"
nameWoman := face.NameAndUserDataContract { Name: &w }
// Returns a Person type
womanPerson, wErr := personGroupPersonClient.Create(faceContext, personGroupID, nameWoman)
if wErr != nil { log.Fatal(wErr) }
fmt.Print("Woman person ID: ")
fmt.Println(womanPerson.PersonID)
// Define man friend
m := "Man"
nameMan := face.NameAndUserDataContract { Name: &m }
// Returns a Person type
manPerson, wErr := personGroupPersonClient.Create(faceContext, personGroupID, nameMan)
if wErr != nil { log.Fatal(wErr) }
fmt.Print("Man person ID: ")
fmt.Println(manPerson.PersonID)
// Define child friend
ch := "Child"
nameChild := face.NameAndUserDataContract { Name: &ch }
// Returns a Person type
childPerson, wErr := personGroupPersonClient.Create(faceContext, personGroupID, nameChild)
if wErr != nil { log.Fatal(wErr) }
fmt.Print("Child person ID: ")
fmt.Println(childPerson.PersonID)

Assign faces to Persons

The following code sorts the images by their prefix, detects faces, and assigns the faces to each respective PersonGroup Person object, based on the image file name.

// Detect faces and register to correct person
// Lists to hold all their person images
womanImages := list.New()
manImages := list.New()
childImages := list.New()

// Collect the local images for each person, add them to their own person group person
images, fErr := ioutil.ReadDir(imagePathRoot)
if fErr != nil { log.Fatal(fErr)}
for _, f := range images {
    imgPath := imagePathRoot + f.Name()
    if strings.HasPrefix(f.Name(), "w") {
        var wfile io.ReadCloser
        wfile, err := os.Open(imgPath)
        if err != nil { log.Fatal(err) }
        womanImages.PushBack(wfile)
        personGroupPersonClient.AddFaceFromStream(faceContext, personGroupID, *womanPerson.PersonID, wfile, "", nil, face.Detection02)
    }
    if strings.HasPrefix(f.Name(), "m") {
        var mfile io.ReadCloser
        mfile, err := os.Open(imgPath)
        if err != nil { log.Fatal(err) }
        manImages.PushBack(mfile)
        personGroupPersonClient.AddFaceFromStream(faceContext, personGroupID, *manPerson.PersonID, mfile, "", nil, face.Detection02)
    }
    if strings.HasPrefix(f.Name(), "ch") {
        var chfile io.ReadCloser
        chfile, err := os.Open(imgPath)
        if err != nil { log.Fatal(err) }
        childImages.PushBack(chfile)
        personGroupPersonClient.AddFaceFromStream(faceContext, personGroupID, *childPerson.PersonID, chfile, "", nil, face.Detection02)
    }
}
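
Note that this loop opens each image file and keeps the stream in a list without ever closing it. In your own code you may prefer to close each stream once its upload finishes, in which case the holding lists are no longer needed. A sketch of that pattern for the woman images follows; it assumes AddFaceFromStream returns a result and an error, as it does in recent SDK versions.

// Sketch: upload the face, close the stream immediately afterward, then check the error.
if strings.HasPrefix(f.Name(), "w") {
    wfile, err := os.Open(imgPath)
    if err != nil { log.Fatal(err) }
    _, addErr := personGroupPersonClient.AddFaceFromStream(faceContext, personGroupID, *womanPerson.PersonID, wfile, "", nil, face.Detection02)
    wfile.Close()
    if addErr != nil { log.Fatal(addErr) }
}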

Tip

You can also create a PersonGroup from remote images referenced by URL. See the PersonGroupPersonClient methods such as AddFaceFromURL.
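
For example, a face could be added to the Woman person directly from a remote image. The following is a sketch only: the image URL is a placeholder, and the AddFaceFromURL parameter order is assumed to mirror AddFaceFromStream above, so verify it against the reference documentation.

// Sketch: add a face to an existing person from an image URL instead of a local stream.
remoteImage := "https://<your-storage-account>/woman-extra.jpg" // placeholder URL
remoteImageURL := face.ImageURL { URL: &remoteImage }
_, addURLErr := personGroupPersonClient.AddFaceFromURL(faceContext, personGroupID, *womanPerson.PersonID, remoteImageURL, "", nil, face.Detection02)
if addURLErr != nil { log.Fatal(addURLErr) }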

Train PersonGroup

Once you've assigned faces, you train the PersonGroup so it can identify the visual features associated with each of its Person objects. The following code calls the asynchronous train method and polls the result, printing the status to the console.

// Train the person group
_, trainErr := personGroupClient.Train(faceContext, personGroupID)
if trainErr != nil { log.Fatal(trainErr) }

// Poll until training succeeds (or fails)
for {
    trainingStatus, tErr := personGroupClient.GetTrainingStatus(faceContext, personGroupID)
    if tErr != nil { log.Fatal(tErr) }

    fmt.Println("Training status:", trainingStatus.Status)
    if trainingStatus.Status == "succeeded" {
        break
    }
    if trainingStatus.Status == "failed" {
        log.Fatal("Person group training failed.")
    }
    time.Sleep(2 * time.Second)
}

Tip

The Face API runs on a set of pre-built models that are static by nature (the model's performance will not regress or improve as the service is run). The results that the model produces might change if Microsoft updates the model's backend without migrating to an entirely new model version. To take advantage of a newer version of a model, you can retrain your PersonGroup, specifying the newer model as a parameter with the same enrollment images.
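
In this quickstart the PersonGroup was created with the default recognition model, so moving to a newer model means creating (or recreating) the group with that model specified, re-adding the same enrollment images, and training again. The following is a rough sketch; it assumes MetaDataContract exposes a RecognitionModel field as in recent SDK versions, and the group ID below is hypothetical.

// Sketch: create a person group pinned to recognition model 3, then retrain after re-enrolling faces.
newGroupID := "unique-person-group-r03" // hypothetical ID for the upgraded group
newMetadata := face.MetaDataContract { Name: &newGroupID, RecognitionModel: face.Recognition03 }
if _, cErr := personGroupClient.Create(faceContext, newGroupID, newMetadata); cErr != nil { log.Fatal(cErr) }
// ... re-create the persons and add the same enrollment images to newGroupID here ...
if _, tErr := personGroupClient.Train(faceContext, newGroupID); tErr != nil { log.Fatal(tErr) }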

Identify a face

The Identify operation takes an image of a person (or multiple people) and looks to find the identity of each face in the image (facial recognition search). It compares each detected face to a PersonGroup, a database of different Person objects whose facial features are known.

Important

In order to run this example, you must first run the code in Create and train a person group.

Get a test image

The following code looks in your images folder for the image test-image-person-group.jpg and loads it into program memory. You can find this image in the same repo as the images used in Create and train a person group: https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/Face/images.

personGroupTestImageName := "test-image-person-group.jpg"
// Use image path root from the one created in person group
personGroupTestImagePath := imagePathRoot
var personGroupTestImage io.ReadCloser
// Returns a ReaderCloser
personGroupTestImage, identErr:= os.Open(personGroupTestImagePath+personGroupTestImageName)
if identErr != nil { log.Fatal(identErr) }

Detect source faces in test image

The next code block does ordinary face detection on the test image to retrieve all of the faces and save them to an array.

// Detect faces in the group test image, using recognition model 1 (the default)
returnIdentifyFaceID := true
// Returns a ListDetectedFace struct.
// The person group was created with the default recognition model, and Identify requires face IDs detected
// with the same model, so Recognition03 is not compatible here.
// We specify detection model 2 because we are not retrieving attributes.
detectedTestImageFaces, dErr := client.DetectWithStream(faceContext, personGroupTestImage, &returnIdentifyFaceID, nil, nil, face.Recognition01, nil, face.Detection02)
if dErr != nil { log.Fatal(dErr) }

// Make list of face IDs from the detection. 
length := len(*detectedTestImageFaces.Value)
testImageFaceIDs := make([]uuid.UUID, length)
// ListDetectedFace is a struct with a Value property that returns a *[]DetectedFace
for i, f := range *detectedTestImageFaces.Value {
    testImageFaceIDs[i] = *f.FaceID
}

Identify faces

The Identify method takes the array of detected faces and compares them to the given PersonGroup (defined and trained in the earlier section). If it can match a detected face to a Person in the group, it saves the result.

// Identify the faces in the test image with everyone in the person group as a query
identifyRequestBody := face.IdentifyRequest { FaceIds: &testImageFaceIDs, PersonGroupID: &personGroupID }
identifiedFaces, err := client.Identify(faceContext, identifyRequestBody)
if err != nil { log.Fatal(err) }

This code then prints detailed match results to the console.

// Get the result of which person(s) were identified
iFaces := *identifiedFaces.Value
for _, identifiedFace := range iFaces {
    fmt.Printf("Person for face ID %v is identified in %v.\n", *identifiedFace.FaceID, personGroupTestImageName)
}
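
The loop above prints only the face IDs. Each identify result also carries a list of candidates with a person ID and a confidence score, so you can resolve matches back to the named Person objects. The following sketch assumes the Candidates field and the person group person client's Get method behave as in current SDK versions.

// Sketch: resolve each identified candidate back to a named person with its confidence score.
for _, identifiedFace := range iFaces {
    if identifiedFace.Candidates == nil { continue }
    for _, candidate := range *identifiedFace.Candidates {
        person, pErr := personGroupPersonClient.Get(faceContext, personGroupID, *candidate.PersonID)
        if pErr != nil { log.Fatal(pErr) }
        fmt.Printf("Face %v was identified as %v (confidence %v).\n", *identifiedFace.FaceID, *person.Name, *candidate.Confidence)
    }
}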

Verify faces

The Verify operation takes a face ID and either another face ID or a Person object and determines whether they belong to the same person.

The following code detects faces in two source images and then verifies each of them against a face detected from a target image.

Get test images

The following code blocks declare variables that will point to the target and source images for the verification operation.

// Create a slice to hold the target photos of the same person
targetImageFileNames := []string{"Family1-Dad1.jpg", "Family1-Dad2.jpg"}

// The source photos may or may not contain the same person as the target photos
sourceImageFileName1 := "Family1-Dad3.jpg"
sourceImageFileName2 := "Family1-Son1.jpg"

Detect faces for verification

The following code detects faces in the source and target images and saves them to variables.

// DetectWithURL parameters
urlSource1 := imageBaseURL + sourceImageFileName1
urlSource2 := imageBaseURL + sourceImageFileName2
url1 :=  face.ImageURL { URL: &urlSource1 }
url2 := face.ImageURL { URL: &urlSource2 }
returnFaceIDVerify := true
returnFaceLandmarksVerify := false
returnRecognitionModelVerify := false

// Detect face(s) from source image 1, returns a ListDetectedFace struct
// We specify detection model 2 because we are not retrieving attributes.
detectedVerifyFaces1, dErrV1 := client.DetectWithURL(faceContext, url1 , &returnFaceIDVerify, &returnFaceLandmarksVerify, nil, face.Recognition03, &returnRecognitionModelVerify, face.Detection02)
if dErrV1 != nil { log.Fatal(dErrV1) }
// Dereference the result, before getting the ID
dVFaceIds1 := *detectedVerifyFaces1.Value 
// Get ID of the detected face
imageSource1Id := dVFaceIds1[0].FaceID
fmt.Println(fmt.Sprintf("%v face(s) detected from image: %v", len(dVFaceIds1), sourceImageFileName1))

// Detect face(s) from source image 2, returns a ListDetectedFace struct
// We specify detection model 2 because we are not retrieving attributes.
detectedVerifyFaces2, dErrV2 := client.DetectWithURL(faceContext, url2 , &returnFaceIDVerify, &returnFaceLandmarksVerify, nil, face.Recognition03, &returnRecognitionModelVerify, face.Detection02)
if dErrV2 != nil { log.Fatal(dErrV2) }
// Dereference the result, before getting the ID
dVFaceIds2 := *detectedVerifyFaces2.Value 
// Get ID of the detected face
imageSource2Id := dVFaceIds2[0].FaceID
fmt.Println(fmt.Sprintf("%v face(s) detected from image: %v", len(dVFaceIds2), sourceImageFileName2))
// Detect faces from each target image URL in the list. DetectWithURL returns a ListDetectedFace struct with a Value of *[]DetectedFace.
// Fixed-size array for the target face IDs (UUIDs)
var detectedVerifyFacesIds [2]uuid.UUID
for i, imageFileName := range targetImageFileNames {
    urlSource := imageBaseURL + imageFileName 
    url :=  face.ImageURL { URL: &urlSource}
    // We specify detection model 2 because we are not retrieving attributes.
    detectedVerifyFaces, dErrV := client.DetectWithURL(faceContext, url, &returnFaceIDVerify, &returnFaceLandmarksVerify, nil, face.Recognition03, &returnRecognitionModelVerify, face.Detection02)
    if dErrV != nil { log.Fatal(dErrV) }
    // Dereference *[]DetectedFace from Value in order to loop through it.
    dVFaces := *detectedVerifyFaces.Value
    // Add the returned face's face ID
    detectedVerifyFacesIds[i] = *dVFaces[0].FaceID
    fmt.Println(fmt.Sprintf("%v face(s) detected from image: %v", len(dVFaces), imageFileName))
}

Get verification results

The following code compares each of the source images to the target image and prints a message indicating whether they belong to the same person.

// Verification example for faces of the same person. The higher the confidence, the more likely the two faces belong to the same person.
// Since the target faces are of the same person, we can use the first ID in the detectedVerifyFacesIds list for the comparison.
verifyRequestBody1 := face.VerifyFaceToFaceRequest{ FaceID1: imageSource1Id, FaceID2: &detectedVerifyFacesIds[0] }
verifyResultSame, vErrSame := client.VerifyFaceToFace(faceContext, verifyRequestBody1)
if vErrSame != nil { log.Fatal(vErrSame) }

fmt.Println()

// Check if the faces are from the same person.
if (*verifyResultSame.IsIdentical) {
    fmt.Println(fmt.Sprintf("Faces from %v & %v are of the same person, with confidence %v", 
    sourceImageFileName1, targetImageFileNames[0], strconv.FormatFloat(*verifyResultSame.Confidence, 'f', 3, 64)))
} else {
    // A low confidence score means the two faces more likely belong to different people.
    fmt.Println(fmt.Sprintf("Faces from %v & %v are of a different person, with confidence %v", 
    sourceImageFileName1, targetImageFileNames[0], strconv.FormatFloat(*verifyResultSame.Confidence, 'f', 3, 64)))
}

// Verification example for faces of different persons.
// Since the target faces are of the same person, we can again use the first ID in the detectedVerifyFacesIds list for the comparison.
verifyRequestBody2 := face.VerifyFaceToFaceRequest{ FaceID1: imageSource2Id, FaceID2: &detectedVerifyFacesIds[0] }
verifyResultDiff, vErrDiff := client.VerifyFaceToFace(faceContext, verifyRequestBody2)
if vErrDiff != nil { log.Fatal(vErrDiff) }
// Check if the faces are from the same person.
if (*verifyResultDiff.IsIdentical) {
    fmt.Println(fmt.Sprintf("Faces from %v & %v are of the same person, with confidence %v", 
    sourceImageFileName2, targetImageFileNames[0], strconv.FormatFloat(*verifyResultDiff.Confidence, 'f', 3, 64)))
} else {
    // A low confidence score means the two faces more likely belong to different people.
    fmt.Println(fmt.Sprintf("Faces from %v & %v are of a different person, with confidence %v", 
    sourceImageFileName2, targetImageFileNames[0], strconv.FormatFloat(*verifyResultDiff.Confidence, 'f', 3, 64)))
}
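
This quickstart verifies one face against another face. As noted at the start of this section, Verify can also compare a detected face against a PersonGroup Person. The following sketch reuses the person group, the woman person, and the test-image face IDs from the Identify section (which were detected with the group's recognition model); the VerifyFaceToPerson method and request fields are assumptions based on recent SDK versions.

// Sketch: verify a detected face against a specific person in the trained person group.
verifyToPersonBody := face.VerifyFaceToPersonRequest {
    FaceID: &testImageFaceIDs[0],
    PersonGroupID: &personGroupID,
    PersonID: womanPerson.PersonID,
}
verifyToPersonResult, vpErr := client.VerifyFaceToPerson(faceContext, verifyToPersonBody)
if vpErr != nil { log.Fatal(vpErr) }
fmt.Printf("Face and person are the same: %v (confidence %v).\n", *verifyToPersonResult.IsIdentical, *verifyToPersonResult.Confidence)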

Run the application

Run your face recognition app from the application directory with the go run <app-name> command.

go run sample-app.go

Clean up resources

If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

If you created a PersonGroup in this quickstart and you want to delete it, call the Delete method.
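
For example, the following sketch deletes the person group created in this quickstart; it assumes Delete returns a response and an error, as in recent SDK versions.

// Sketch: delete the person group created above.
if _, delErr := personGroupClient.Delete(faceContext, personGroupID); delErr != nil {
    log.Fatal(delErr)
}
fmt.Println("Deleted person group: " + personGroupID)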

Next steps

In this quickstart, you learned how to use the Face client library for Go to do basic facial recognition tasks. Next, explore the reference documentation to learn more about the library.