Quickstart: Use the Face client library
Get started with facial recognition using the Face client library for .NET. Follow these steps to install the package and try out the example code for basic tasks. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images.
Use the Face client library for .NET to detect faces, find similar faces, create person groups, and identify people in images.
Reference documentation | Library source code | Package (NuGet) | Samples
Prerequisites
- Azure subscription - Create one for free
- The Visual Studio IDE or the current version of .NET Core.
- Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, click Go to resource.
- You will need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below later in the quickstart.
- You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
Setting up
Create a new C# application
Using Visual Studio, create a new .NET Core application.
Install the client library
Once you've created a new project, install the client library by right-clicking on the project solution in the Solution Explorer and selecting Manage NuGet Packages. In the package manager that opens, select Browse, check Include prerelease, and search for Microsoft.Azure.CognitiveServices.Vision.Face. Select version 2.6.0-preview.1, and then select Install.
Tip
Want to view the whole quickstart code file at once? You can find it on GitHub, which contains the code examples in this quickstart.
From the project directory, open the program.cs file and add the following using directives:
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;
In the application's Program class, create variables for your resource's key and endpoint.
Important
Go to the Azure portal. If the Face resource you created in the Prerequisites section deployed successfully, click the Go to Resource button under Next Steps. You can find your key and endpoint on the resource's Keys and Endpoint page, under Resource Management.
Remember to remove the key from your code when you're done, and never post it publicly. For production, consider using a secure way of storing and accessing your credentials. See the Cognitive Services security article for more information.
// From your Face subscription in the Azure portal, get your subscription key and endpoint.
const string SUBSCRIPTION_KEY = "<your subscription key>";
const string ENDPOINT = "<your api endpoint>";
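If you prefer not to hardcode these values, one lightweight alternative is to read them from environment variables instead. The following is a minimal sketch, assuming you've set environment variables named FACE_SUBSCRIPTION_KEY and FACE_ENDPOINT (the names are illustrative, not required by the SDK):
// Alternative (sketch): read the key and endpoint from environment variables.
// The variable names below are an example; set them in your environment first.
static readonly string SUBSCRIPTION_KEY = Environment.GetEnvironmentVariable("FACE_SUBSCRIPTION_KEY");
static readonly string ENDPOINT = Environment.GetEnvironmentVariable("FACE_ENDPOINT");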
In the application's Main method, add calls for the methods used in this quickstart. You will implement these later.
// Authenticate.
IFaceClient client = Authenticate(ENDPOINT, SUBSCRIPTION_KEY);
// Detect - get features from faces.
DetectFaceExtract(client, IMAGE_BASE_URL, RECOGNITION_MODEL3).Wait();
// Find Similar - find a similar face from a list of faces.
FindSimilar(client, IMAGE_BASE_URL, RECOGNITION_MODEL3).Wait();
// Verify - compare two faces to determine whether they belong to the same person.
Verify(client, IMAGE_BASE_URL, RECOGNITION_MODEL3).Wait();
// Identify - recognize a face(s) in a person group (a person group is created in this example).
IdentifyInPersonGroup(client, IMAGE_BASE_URL, RECOGNITION_MODEL3).Wait();
// LargePersonGroup - create, then get data.
LargePersonGroup(client, IMAGE_BASE_URL, RECOGNITION_MODEL3).Wait();
// Group faces - automatically group similar faces.
Group(client, IMAGE_BASE_URL, RECOGNITION_MODEL3).Wait();
// FaceList - create a face list, then get data
Object model
The following classes and interfaces handle some of the major features of the Face .NET client library:
Name | Description |
---|---|
FaceClient | This class represents your authorization to use the Face service, and you need it for all Face functionality. You instantiate it with your subscription information, and you use it to produce instances of other classes. |
FaceOperations | This class handles the basic detection and recognition tasks that you can do with human faces. |
DetectedFace | This class represents all of the data that was detected from a single face in an image. You can use it to retrieve detailed information about the face. |
FaceListOperations | This class manages the cloud-stored FaceList constructs, which store an assorted set of faces. |
PersonGroupPersonExtensions | This class manages the cloud-stored Person constructs, which store a set of faces that belong to a single person. |
PersonGroupOperations | This class manages the cloud-stored PersonGroup constructs, which store a set of assorted Person objects. |
Code examples
The code snippets below show you how to do the following tasks with the Face client library for .NET:
- Authenticate the client
- Detect faces in an image
- Find similar faces
- Create a PersonGroup
- Identify a face
Authenticate the client
In a new method, instantiate a client with your endpoint and key. Create an ApiKeyServiceClientCredentials object with your key, and use it with your endpoint to create a FaceClient object.
/*
* AUTHENTICATE
* Uses subscription key and endpoint to create a client.
*/
public static IFaceClient Authenticate(string endpoint, string key)
{
return new FaceClient(new ApiKeyServiceClientCredentials(key)) { Endpoint = endpoint };
}
Declare helper fields
The following fields are needed for several of the Face operations you'll add later. At the root of your Program class, define the following URL string. This URL points to a folder of sample images.
// Used for all examples.
// URL for the images.
const string IMAGE_BASE_URL = "https://csdx.blob.core.windows.net/resources/Face/Images/";
In your Main method, define strings to point to the different recognition model types. Later on, you'll be able to specify which recognition model you want to use for face detection. See Specify a recognition model for information on these options.
// Recognition model 3 was released in May 2020.
// It is recommended since its overall accuracy is improved
// compared with models 1 and 2.
const string RECOGNITION_MODEL3 = RecognitionModel.Recognition03;
Detect faces in an image
Get detected face objects
Create a new method to detect faces. The DetectFaceExtract method processes three of the images at the given URL and creates a list of DetectedFace objects in program memory. The list of FaceAttributeType values specifies which features to extract.
/*
* DETECT FACES
* Detects features from faces and IDs them.
*/
public static async Task DetectFaceExtract(IFaceClient client, string url, string recognitionModel)
{
Console.WriteLine("========DETECT FACES========");
Console.WriteLine();
// Create a list of images
List<string> imageFileNames = new List<string>
{
"detection1.jpg", // single female with glasses
// "detection2.jpg", // (optional: single man)
// "detection3.jpg", // (optional: single male construction worker)
// "detection4.jpg", // (optional: 3 people at cafe, 1 is blurred)
"detection5.jpg", // family, woman child man
"detection6.jpg" // elderly couple, male female
};
foreach (var imageFileName in imageFileNames)
{
IList<DetectedFace> detectedFaces;
// Detect faces with all attributes from image url.
detectedFaces = await client.Face.DetectWithUrlAsync($"{url}{imageFileName}",
returnFaceAttributes: new List<FaceAttributeType?> { FaceAttributeType.Accessories, FaceAttributeType.Age,
FaceAttributeType.Blur, FaceAttributeType.Emotion, FaceAttributeType.Exposure, FaceAttributeType.FacialHair,
FaceAttributeType.Gender, FaceAttributeType.Glasses, FaceAttributeType.Hair, FaceAttributeType.HeadPose,
FaceAttributeType.Makeup, FaceAttributeType.Noise, FaceAttributeType.Occlusion, FaceAttributeType.Smile },
// We specify detection model 1 because we are retrieving attributes.
detectionModel: DetectionModel.Detection01,
recognitionModel: recognitionModel);
Console.WriteLine($"{detectedFaces.Count} face(s) detected from image `{imageFileName}`.");
Tip
You can also detect faces in a local image. See the IFaceOperations methods such as DetectWithStreamAsync.
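For example, inside an async method you could detect faces in a local file with a call along these lines. This is a minimal sketch, and the file path and attribute list are illustrative:
// Sketch: detect faces in a local image file with DetectWithStreamAsync.
// The file path is illustrative; adjust it for your environment.
using (FileStream imageStream = File.OpenRead("detection1.jpg"))
{
    IList<DetectedFace> localFaces = await client.Face.DetectWithStreamAsync(imageStream,
        returnFaceAttributes: new List<FaceAttributeType?> { FaceAttributeType.Age, FaceAttributeType.Glasses },
        detectionModel: DetectionModel.Detection01,
        recognitionModel: RecognitionModel.Recognition03);
    Console.WriteLine($"{localFaces.Count} face(s) detected from the local image.");
}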
Display detected face data
The rest of the DetectFaceExtract method parses and prints the attribute data for each detected face. Each attribute must be specified separately in the original face detection API call (in the FaceAttributeType list). The following code processes every attribute, but you will likely only need to use one or a few.
// Parse and print all attributes of each detected face.
foreach (var face in detectedFaces)
{
Console.WriteLine($"Face attributes for {imageFileName}:");
// Get bounding box of the faces
Console.WriteLine($"Rectangle(Left/Top/Width/Height) : {face.FaceRectangle.Left} {face.FaceRectangle.Top} {face.FaceRectangle.Width} {face.FaceRectangle.Height}");
// Get accessories of the faces
List<Accessory> accessoriesList = (List<Accessory>)face.FaceAttributes.Accessories;
int count = face.FaceAttributes.Accessories.Count;
string accessory; string[] accessoryArray = new string[count];
if (count == 0) { accessory = "NoAccessories"; }
else
{
for (int i = 0; i < count; ++i) { accessoryArray[i] = accessoriesList[i].Type.ToString(); }
accessory = string.Join(",", accessoryArray);
}
Console.WriteLine($"Accessories : {accessory}");
// Get face other attributes
Console.WriteLine($"Age : {face.FaceAttributes.Age}");
Console.WriteLine($"Blur : {face.FaceAttributes.Blur.BlurLevel}");
// Get emotion on the face
string emotionType = string.Empty;
double emotionValue = 0.0;
Emotion emotion = face.FaceAttributes.Emotion;
if (emotion.Anger > emotionValue) { emotionValue = emotion.Anger; emotionType = "Anger"; }
if (emotion.Contempt > emotionValue) { emotionValue = emotion.Contempt; emotionType = "Contempt"; }
if (emotion.Disgust > emotionValue) { emotionValue = emotion.Disgust; emotionType = "Disgust"; }
if (emotion.Fear > emotionValue) { emotionValue = emotion.Fear; emotionType = "Fear"; }
if (emotion.Happiness > emotionValue) { emotionValue = emotion.Happiness; emotionType = "Happiness"; }
if (emotion.Neutral > emotionValue) { emotionValue = emotion.Neutral; emotionType = "Neutral"; }
if (emotion.Sadness > emotionValue) { emotionValue = emotion.Sadness; emotionType = "Sadness"; }
if (emotion.Surprise > emotionValue) { emotionType = "Surprise"; }
Console.WriteLine($"Emotion : {emotionType}");
// Get more face attributes
Console.WriteLine($"Exposure : {face.FaceAttributes.Exposure.ExposureLevel}");
Console.WriteLine($"FacialHair : {string.Format("{0}", face.FaceAttributes.FacialHair.Moustache + face.FaceAttributes.FacialHair.Beard + face.FaceAttributes.FacialHair.Sideburns > 0 ? "Yes" : "No")}");
Console.WriteLine($"Gender : {face.FaceAttributes.Gender}");
Console.WriteLine($"Glasses : {face.FaceAttributes.Glasses}");
// Get hair color
Hair hair = face.FaceAttributes.Hair;
string color = null;
if (hair.HairColor.Count == 0) { if (hair.Invisible) { color = "Invisible"; } else { color = "Bald"; } }
HairColorType returnColor = HairColorType.Unknown;
double maxConfidence = 0.0f;
foreach (HairColor hairColor in hair.HairColor)
{
if (hairColor.Confidence <= maxConfidence) { continue; }
maxConfidence = hairColor.Confidence; returnColor = hairColor.Color; color = returnColor.ToString();
}
Console.WriteLine($"Hair : {color}");
// Get more attributes
Console.WriteLine($"HeadPose : {string.Format("Pitch: {0}, Roll: {1}, Yaw: {2}", Math.Round(face.FaceAttributes.HeadPose.Pitch, 2), Math.Round(face.FaceAttributes.HeadPose.Roll, 2), Math.Round(face.FaceAttributes.HeadPose.Yaw, 2))}");
Console.WriteLine($"Makeup : {string.Format("{0}", (face.FaceAttributes.Makeup.EyeMakeup || face.FaceAttributes.Makeup.LipMakeup) ? "Yes" : "No")}");
Console.WriteLine($"Noise : {face.FaceAttributes.Noise.NoiseLevel}");
Console.WriteLine($"Occlusion : {string.Format("EyeOccluded: {0}", face.FaceAttributes.Occlusion.EyeOccluded ? "Yes" : "No")} " +
$" {string.Format("ForeheadOccluded: {0}", face.FaceAttributes.Occlusion.ForeheadOccluded ? "Yes" : "No")} {string.Format("MouthOccluded: {0}", face.FaceAttributes.Occlusion.MouthOccluded ? "Yes" : "No")}");
Console.WriteLine($"Smile : {face.FaceAttributes.Smile}");
Console.WriteLine();
}
}
}
Find similar faces
The following code takes a single detected face (source) and searches a set of other faces (target) to find matches (face search by image). When it finds a match, it prints the ID of the matched face to the console.
Detect faces for comparison
First, define a second face detection method. You need to detect faces in images before you can compare them, and this detection method is optimized for comparison operations. It doesn't extract detailed face attributes like in the section above, and it uses a different recognition model.
private static async Task<List<DetectedFace>> DetectFaceRecognize(IFaceClient faceClient, string url, string recognition_model)
{
// Detect faces from image URL. We only need face IDs for recognition, so no face attributes are requested.
// We use detection model 2 because we are not retrieving attributes.
IList<DetectedFace> detectedFaces = await faceClient.Face.DetectWithUrlAsync(url, recognitionModel: recognition_model, detectionModel: DetectionModel.Detection02);
Console.WriteLine($"{detectedFaces.Count} face(s) detected from image `{Path.GetFileName(url)}`");
return detectedFaces.ToList();
}
Find matches
The following method detects faces in a set of target images and in a single source image. Then, it compares them and finds all the target images that are similar to the source image.
/*
* FIND SIMILAR
* This example will take an image and find a similar one to it in another image.
*/
public static async Task FindSimilar(IFaceClient client, string url, string recognition_model)
{
Console.WriteLine("========FIND SIMILAR========");
Console.WriteLine();
List<string> targetImageFileNames = new List<string>
{
"Family1-Dad1.jpg",
"Family1-Daughter1.jpg",
"Family1-Mom1.jpg",
"Family1-Son1.jpg",
"Family2-Lady1.jpg",
"Family2-Man1.jpg",
"Family3-Lady1.jpg",
"Family3-Man1.jpg"
};
string sourceImageFileName = "findsimilar.jpg";
IList<Guid?> targetFaceIds = new List<Guid?>();
foreach (var targetImageFileName in targetImageFileNames)
{
// Detect faces from target image url.
var faces = await DetectFaceRecognize(client, $"{url}{targetImageFileName}", recognition_model);
// Add detected faceId to list of GUIDs.
targetFaceIds.Add(faces[0].FaceId.Value);
}
// Detect faces from source image url.
IList<DetectedFace> detectedFaces = await DetectFaceRecognize(client, $"{url}{sourceImageFileName}", recognition_model);
Console.WriteLine();
// Find similar faces in the list of IDs. Comparing only the first face in the list for testing purposes.
IList<SimilarFace> similarResults = await client.Face.FindSimilarAsync(detectedFaces[0].FaceId.Value, null, null, targetFaceIds);
Print matches
The following code prints the match details to the console:
foreach (var similarResult in similarResults)
{
Console.WriteLine($"Faces from {sourceImageFileName} & ID:{similarResult.FaceId} are similar with confidence: {similarResult.Confidence}.");
}
Console.WriteLine();
Identify a face
The Identify operation takes an image of one or more people and finds the identity of each face in the image (facial recognition search). It compares each detected face to a PersonGroup, a database of different Person objects whose facial features are known. To do the Identify operation, you first need to create and train a PersonGroup.
Create a PersonGroup
The following code creates a PersonGroup with six different Person objects. It associates each Person with a set of example images, and then it trains to recognize each person by their facial characteristics. Person and PersonGroup objects are used in the Verify, Identify, and Group operations.
Declare a string variable at the root of your class to represent the ID of the PersonGroup you'll create.
static string personGroupId = Guid.NewGuid().ToString();
In a new method, add the following code. This method will carry out the Identify operation. The first block of code associates the names of persons with their example images.
public static async Task IdentifyInPersonGroup(IFaceClient client, string url, string recognitionModel)
{
Console.WriteLine("========IDENTIFY FACES========");
Console.WriteLine();
// Create a dictionary for all your images, grouping similar ones under the same key.
Dictionary<string, string[]> personDictionary =
new Dictionary<string, string[]>
{ { "Family1-Dad", new[] { "Family1-Dad1.jpg", "Family1-Dad2.jpg" } },
{ "Family1-Mom", new[] { "Family1-Mom1.jpg", "Family1-Mom2.jpg" } },
{ "Family1-Son", new[] { "Family1-Son1.jpg", "Family1-Son2.jpg" } },
{ "Family1-Daughter", new[] { "Family1-Daughter1.jpg", "Family1-Daughter2.jpg" } },
{ "Family2-Lady", new[] { "Family2-Lady1.jpg", "Family2-Lady2.jpg" } },
{ "Family2-Man", new[] { "Family2-Man1.jpg", "Family2-Man2.jpg" } }
};
// A group photo that includes some of the persons you seek to identify from your dictionary.
string sourceImageFileName = "identification1.jpg";
Notice that this code defines a variable sourceImageFileName. This variable corresponds to the source image, the image that contains people to identify.
Next, add the following code to create a Person object for each person in the Dictionary and add the face data from the appropriate images. Each Person object is associated with the same PersonGroup through its unique ID string. Remember to pass the variables client, url, and RECOGNITION_MODEL3 into this method.
// Create a person group.
Console.WriteLine($"Create a person group ({personGroupId}).");
await client.PersonGroup.CreateAsync(personGroupId, personGroupId, recognitionModel: recognitionModel);
// The similar faces will be grouped into a single person group person.
foreach (var groupedFace in personDictionary.Keys)
{
// Limit TPS
await Task.Delay(250);
Person person = await client.PersonGroupPerson.CreateAsync(personGroupId: personGroupId, name: groupedFace);
Console.WriteLine($"Create a person group person '{groupedFace}'.");
// Add face to the person group person.
foreach (var similarImage in personDictionary[groupedFace])
{
Console.WriteLine($"Add face to the person group person({groupedFace}) from image `{similarImage}`");
PersistedFace face = await client.PersonGroupPerson.AddFaceFromUrlAsync(personGroupId, person.PersonId,
$"{url}{similarImage}", similarImage);
}
}
Tip
You can also create a PersonGroup from local images. See the IPersonGroupPerson methods such as AddFaceFromStreamAsync.
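For instance, a face could be added from a local file with a call like the following. This is a minimal sketch; the file name is illustrative, and personGroupId and person come from the surrounding code:
// Sketch: add a face to a person group person from a local image file.
// The file name is illustrative; personGroupId and person are defined in the code above.
using (FileStream imageStream = File.OpenRead("Family1-Dad1.jpg"))
{
    PersistedFace persistedFace = await client.PersonGroupPerson.AddFaceFromStreamAsync(
        personGroupId, person.PersonId, imageStream);
    Console.WriteLine($"Added face {persistedFace.PersistedFaceId} from a local image.");
}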
Train the PersonGroup
Once you've extracted face data from your images and sorted it into different Person objects, you must train the PersonGroup to identify the visual features associated with each of its Person objects. The following code calls the asynchronous train method and polls the results, printing the status to the console.
// Start to train the person group.
Console.WriteLine();
Console.WriteLine($"Train person group {personGroupId}.");
await client.PersonGroup.TrainAsync(personGroupId);
// Wait until the training is completed.
while (true)
{
await Task.Delay(1000);
var trainingStatus = await client.PersonGroup.GetTrainingStatusAsync(personGroupId);
Console.WriteLine($"Training status: {trainingStatus.Status}.");
if (trainingStatus.Status == TrainingStatusType.Succeeded) { break; }
}
Console.WriteLine();
Tip
The Face API runs on a set of pre-built models that are static by nature (the model's performance will not regress or improve as the service is run). The results that the model produces might change if Microsoft updates the model's backend without migrating to an entirely new model version. To take advantage of a newer version of a model, you can retrain your PersonGroup, specifying the newer model as a parameter with the same enrollment images.
This PersonGroup and its associated Person objects are now ready to be used in the Verify, Identify, or Group operations.
Identify faces
The following code takes the source image and creates a list of all the faces detected in the image. These are the faces that will be identified against the PersonGroup.
List<Guid?> sourceFaceIds = new List<Guid?>();
// Detect faces from source image url.
List<DetectedFace> detectedFaces = await DetectFaceRecognize(client, $"{url}{sourceImageFileName}", recognitionModel);
// Add detected faceId to sourceFaceIds.
foreach (var detectedFace in detectedFaces) { sourceFaceIds.Add(detectedFace.FaceId.Value); }
The next code snippet calls the IdentifyAsync operation and prints the results to the console. Here, the service attempts to match each face from the source image to a Person in the given PersonGroup. This closes out your Identify method.
// Identify the faces in a person group.
var identifyResults = await client.Face.IdentifyAsync(sourceFaceIds, personGroupId);
foreach (var identifyResult in identifyResults)
{
Person person = await client.PersonGroupPerson.GetAsync(personGroupId, identifyResult.Candidates[0].PersonId);
Console.WriteLine($"Person '{person.Name}' is identified for face in: {sourceImageFileName} - {identifyResult.FaceId}," +
$" confidence: {identifyResult.Candidates[0].Confidence}.");
}
Console.WriteLine();
}
Run the application
Run the application by clicking the Debug button at the top of the IDE window.
Clean up resources
If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
If you created a PersonGroup in this quickstart and you want to delete it, run the following code in your program:
// At the end, delete the person group (since this example is for testing only).
Console.WriteLine("========DELETE PERSON GROUP========");
Console.WriteLine();
DeletePersonGroup(client, personGroupId).Wait();
Define the deletion method with the following code:
/*
* DELETE PERSON GROUP
* After this entire example is executed, delete the person group in your Azure account,
* otherwise you cannot recreate one with the same name (if running example repeatedly).
*/
public static async Task DeletePersonGroup(IFaceClient client, String personGroupId)
{
await client.PersonGroup.DeleteAsync(personGroupId);
Console.WriteLine($"Deleted the person group {personGroupId}.");
}
Next steps
In this quickstart, you learned how to use the Face client library for .NET to do basic facial recognition tasks. Next, explore the reference documentation to learn more about the library.
- What is the Face service?
- The source code for this sample can be found on GitHub.
Get started with facial recognition using the Face client library for Go. Follow these steps to install the package and try out the example code for basic tasks. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images.
Use the Face service client library for Go to detect faces, find similar faces, create and train person groups, and identify and verify faces in images.
Reference documentation | Library source code | SDK download
Prerequisites
- The latest version of Go
- Azure subscription - Create one for free
- Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, click Go to resource.
- You will need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below later in the quickstart.
- You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
- After you get a key and endpoint, create environment variables for the key and endpoint, named FACE_SUBSCRIPTION_KEY and FACE_ENDPOINT, respectively.
Setting up
Create a Go project directory
In a console window (cmd, PowerShell, Terminal, Bash), create a new workspace for your Go project, named my-app, and navigate to it.
mkdir -p my-app/{src,bin,pkg}
cd my-app
Your workspace will contain three folders:
- src - This directory will contain source code and packages. Any packages installed with the go get command will be in this folder.
- pkg - This directory will contain the compiled Go package objects. These files all have a .a extension.
- bin - This directory will contain the binary executable files that are created when you run go install.
Tip
To learn more about the structure of a Go workspace, see the Go language documentation. This guide includes information for setting $GOPATH and $GOROOT.
Install the client library for Go
Next, install the client library for Go:
go get -u github.com/Azure/azure-sdk-for-go/services/cognitiveservices/v1.0/face
or, if you use dep, run the following within your repo:
dep ensure -add github.com/Azure/azure-sdk-for-go/services/cognitiveservices/v1.0/face
Create a Go application
Next, create a file in the src directory named sample-app.go:
cd src
touch sample-app.go
Open sample-app.go in your preferred IDE or text editor. Then add the package name and import the following libraries:
package main
import (
"encoding/json"
"container/list"
"context"
"fmt"
"github.com/Azure/azure-sdk-for-go/services/cognitiveservices/v1.0/face"
"github.com/Azure/go-autorest/autorest"
"github.com/satori/go.uuid"
"io"
"io/ioutil"
"log"
"os"
"path"
"strconv"
"strings"
"time"
)
Next, you'll begin adding code to carry out different Face service operations.
Object model
The following classes and interfaces handle some of the major features of the Face service Go client library.
Name | Description |
---|---|
BaseClient | This class represents your authorization to use the Face service, and you need it for all Face functionality. You instantiate it with your subscription information, and you use it to produce instances of other classes. |
Client | This class handles the basic detection and recognition tasks that you can do with human faces. |
DetectedFace | This class represents all of the data that was detected from a single face in an image. You can use it to retrieve detailed information about the face. |
ListClient | This class manages the cloud-stored FaceList constructs, which store an assorted set of faces. |
PersonGroupPersonClient | This class manages the cloud-stored Person constructs, which store a set of faces that belong to a single person. |
PersonGroupClient | This class manages the cloud-stored PersonGroup constructs, which store a set of assorted Person objects. |
SnapshotClient | This class manages the Snapshot functionality. You can use it to temporarily save all of your cloud-based Face data and migrate that data to a new Azure subscription. |
Code examples
These code samples show you how to complete basic tasks using the Face service client library for Go:
- Authenticate the client
- Detect faces in an image
- Find similar faces
- Create and train a PersonGroup
- Identify a face
Authenticate the client
Note
This quickstart assumes you've created environment variables for your Face key and endpoint, named FACE_SUBSCRIPTION_KEY and FACE_ENDPOINT, respectively.
Create a main function and add the following code to it to instantiate a client with your endpoint and key. You create a CognitiveServicesAuthorizer object with your key, and use it with your endpoint to create a Client object. This code also instantiates a context object, which is needed for the creation of client objects. It also defines a remote location where some of the sample images in this quickstart are found.
func main() {
// A global context for use in all samples
faceContext := context.Background()
// Base url for the Verify and Large Face List examples
const imageBaseURL = "https://csdx.blob.core.windows.net/resources/Face/Images/"
/*
Authenticate
*/
// Add FACE_SUBSCRIPTION_KEY and FACE_ENDPOINT to your environment variables.
subscriptionKey := os.Getenv("FACE_SUBSCRIPTION_KEY")
endpoint := os.Getenv("FACE_ENDPOINT")
// Client used for Detect Faces, Find Similar, and Verify examples.
client := face.NewClient(endpoint)
client.Authorizer = autorest.NewCognitiveServicesAuthorizer(subscriptionKey)
/*
END - Authenticate
*/
Detect faces in an image
Add the following code in your main method. This code defines a remote sample image and specifies which face features to extract from the image. It also specifies which AI model to use to extract data from the detected face(s). See Specify a recognition model for information on these options. Finally, the DetectWithURL method does the face detection operation on the image and saves the results in program memory.
// Detect a face in an image that contains a single face
singleFaceImageURL := "https://www.biography.com/.image/t_share/MTQ1MzAyNzYzOTgxNTE0NTEz/john-f-kennedy---mini-biography.jpg"
singleImageURL := face.ImageURL { URL: &singleFaceImageURL }
singleImageName := path.Base(singleFaceImageURL)
// Array types chosen for the attributes of Face
attributes := []face.AttributeType {"age", "emotion", "gender"}
returnFaceID := true
returnRecognitionModel := false
returnFaceLandmarks := false
// API call to detect faces in single-faced image, using recognition model 3
// We specify detection model 1 because we are retrieving attributes.
detectSingleFaces, dErr := client.DetectWithURL(faceContext, singleImageURL, &returnFaceID, &returnFaceLandmarks, attributes, face.Recognition03, &returnRecognitionModel, face.Detection01)
if dErr != nil { log.Fatal(dErr) }
// Dereference *[]DetectedFace, in order to loop through it.
dFaces := *detectSingleFaces.Value
Tip
You can also detect faces in a local image. See the Client methods such as DetectWithStream.
Display detected face data
The next block of code takes the first element in the array of DetectedFace objects and prints its attributes to the console. If you used an image with multiple faces, you should iterate through the array instead.
fmt.Println("Detected face in (" + singleImageName + ") with ID(s): ")
fmt.Println(dFaces[0].FaceID)
fmt.Println()
// Find/display the age and gender attributes
for _, dFace := range dFaces {
fmt.Println("Face attributes:")
fmt.Printf(" Age: %.0f", *dFace.FaceAttributes.Age)
fmt.Println("\n Gender: " + dFace.FaceAttributes.Gender)
}
// Get/display the emotion attribute
emotionStruct := *dFaces[0].FaceAttributes.Emotion
// Convert struct to a map
var emotionMap map[string]float64
result, _ := json.Marshal(emotionStruct)
json.Unmarshal(result, &emotionMap)
// Find the emotion with the highest score (confidence level). Range is 0.0 - 1.0.
var highest float64
emotion := ""
dScore := -1.0
for name, value := range emotionMap{
if (value > highest) {
emotion, dScore = name, value
highest = value
}
}
fmt.Println(" Emotion: " + emotion + " (score: " + strconv.FormatFloat(dScore, 'f', 3, 64) + ")")
Find similar faces
The following code takes a single detected face (source) and searches a set of other faces (target) to find matches (face search by image). When it finds a match, it prints the ID of the matched face to the console.
Detect faces for comparison
First, save a reference to the face you detected in the Detect faces in an image section. This face will be the source.
// Select an ID in single-faced image for comparison to faces detected in group image. Used in Find Similar.
firstImageFaceID := dFaces[0].FaceID
Then enter the following code to detect a set of faces in a different image. These faces will be the target.
// Detect the faces in an image that contains multiple faces
groupImageURL := "http://www.historyplace.com/kennedy/president-family-portrait-closeup.jpg"
groupImageName := path.Base(groupImageURL)
groupImage := face.ImageURL { URL: &groupImageURL }
// API call to detect faces in group image, using recognition model 3. This returns a ListDetectedFace struct.
// We specify detection model 2 because we are not retrieving attributes.
detectedGroupFaces, dgErr := client.DetectWithURL(faceContext, groupImage, &returnFaceID, &returnFaceLandmarks, nil, face.Recognition03, &returnRecognitionModel, face.Detection02)
if dgErr != nil { log.Fatal(dgErr) }
fmt.Println()
// Detect faces in the group image.
// Dereference *[]DetectedFace, in order to loop through it.
dFaces2 := *detectedGroupFaces.Value
// Make slice list of UUIDs
faceIDs := make([]uuid.UUID, len(dFaces2))
fmt.Print("Detected faces in (" + groupImageName + ") with ID(s):\n")
for i, face := range dFaces2 {
faceIDs[i] = *face.FaceID // Dereference DetectedFace.FaceID
fmt.Println(*face.FaceID)
}
Find matches
The following code uses the FindSimilar method to find all of the target faces that match the source face.
// Add single-faced image ID to struct
findSimilarBody := face.FindSimilarRequest { FaceID: firstImageFaceID, FaceIds: &faceIDs }
// Get the list of similar faces found in the group image of previously detected faces
listSimilarFaces, sErr := client.FindSimilar(faceContext, findSimilarBody)
if sErr != nil { log.Fatal(sErr) }
// The *[]SimilarFace
simFaces := *listSimilarFaces.Value
Print matches
The following code prints the match details to the console.
// Print the details of the similar faces detected
fmt.Print("Similar faces found in (" + groupImageName + ") with ID(s):\n")
var sScore float64
for _, face := range simFaces {
fmt.Println(face.FaceID)
// Confidence of the found face with range 0.0 to 1.0.
sScore = *face.Confidence
fmt.Println("The similarity confidence: ", strconv.FormatFloat(sScore, 'f', 3, 64))
}
Create and train a PersonGroup
To step through this scenario, you need to save the following images to the root directory of your project: https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/Face/images.
This group of images contains three sets of single-face images that correspond to three different people. The code will define three PersonGroup Person objects and associate them with image files that start with woman, man, and child.
Create PersonGroup
Once you've downloaded your images, add the following code to the bottom of your main method. This code authenticates a PersonGroupClient object and then uses it to define a new PersonGroup.
// Get working directory
root, rootErr := os.Getwd()
if rootErr != nil { log.Fatal(rootErr) }
// Full path to images folder
imagePathRoot := path.Join(root+"\\images\\")
// Authenticate - Need a special person group client for your person group
personGroupClient := face.NewPersonGroupClient(endpoint)
personGroupClient.Authorizer = autorest.NewCognitiveServicesAuthorizer(subscriptionKey)
// Create the Person Group
// Create an empty Person Group. Person Group ID must be lower case, alphanumeric, and/or with '-', '_'.
personGroupID := "unique-person-group"
fmt.Println("Person group ID: " + personGroupID)
metadata := face.MetaDataContract { Name: &personGroupID }
// Create the person group
personGroupClient.Create(faceContext, personGroupID, metadata)
Create PersonGroup Persons
The next block of code authenticates a PersonGroupPersonClient and uses it to define three new PersonGroup Person objects. These objects each represent a single person in the set of images.
// Authenticate - Need a special person group person client for your person group person
personGroupPersonClient := face.NewPersonGroupPersonClient(endpoint)
personGroupPersonClient.Authorizer = autorest.NewCognitiveServicesAuthorizer(subscriptionKey)
// Create each person group person for each group of images (woman, man, child)
// Define woman friend
w := "Woman"
nameWoman := face.NameAndUserDataContract { Name: &w }
// Returns a Person type
womanPerson, wErr := personGroupPersonClient.Create(faceContext, personGroupID, nameWoman)
if wErr != nil { log.Fatal(wErr) }
fmt.Print("Woman person ID: ")
fmt.Println(womanPerson.PersonID)
// Define man friend
m := "Man"
nameMan := face.NameAndUserDataContract { Name: &m }
// Returns a Person type
manPerson, wErr := personGroupPersonClient.Create(faceContext, personGroupID, nameMan)
if wErr != nil { log.Fatal(wErr) }
fmt.Print("Man person ID: ")
fmt.Println(manPerson.PersonID)
// Define child friend
ch := "Child"
nameChild := face.NameAndUserDataContract { Name: &ch }
// Returns a Person type
childPerson, wErr := personGroupPersonClient.Create(faceContext, personGroupID, nameChild)
if wErr != nil { log.Fatal(wErr) }
fmt.Print("Child person ID: ")
fmt.Println(childPerson.PersonID)
Assign faces to Persons
The following code sorts the images by their prefix, detects faces, and assigns the faces to each respective PersonGroup Person object, based on the image file name.
// Detect faces and register to correct person
// Lists to hold all their person images
womanImages := list.New()
manImages := list.New()
childImages := list.New()
// Collect the local images for each person, add them to their own person group person
images, fErr := ioutil.ReadDir(imagePathRoot)
if fErr != nil { log.Fatal(fErr)}
for _, f := range images {
path:= (imagePathRoot+f.Name())
if strings.HasPrefix(f.Name(), "w") {
var wfile io.ReadCloser
wfile, err:= os.Open(path)
if err != nil { log.Fatal(err) }
womanImages.PushBack(wfile)
personGroupPersonClient.AddFaceFromStream(faceContext, personGroupID, *womanPerson.PersonID, wfile, "", nil, face.Detection02)
}
if strings.HasPrefix(f.Name(), "m") {
var mfile io.ReadCloser
mfile, err:= os.Open(path)
if err != nil { log.Fatal(err) }
manImages.PushBack(mfile)
personGroupPersonClient.AddFaceFromStream(faceContext, personGroupID, *manPerson.PersonID, mfile, "", nil, face.Detection02)
}
if strings.HasPrefix(f.Name(), "ch") {
var chfile io.ReadCloser
chfile, err:= os.Open(path)
if err != nil { log.Fatal(err) }
childImages.PushBack(chfile)
personGroupPersonClient.AddFaceFromStream(faceContext, personGroupID, *childPerson.PersonID, chfile, "", nil, face.Detection02)
}
}
Tip
You can also create a PersonGroup from remote images referenced by URL. See the PersonGroupPersonClient methods such as AddFaceFromURL.
Train PersonGroup
Once you've assigned faces, you train the PersonGroup so it can identify the visual features associated with each of its Person objects. The following code calls the asynchronous train method and polls the result, printing the status to the console.
// Train the person group
personGroupClient.Train(faceContext, personGroupID)
// Wait for it to succeed in training
for {
trainingStatus, tErr := personGroupClient.GetTrainingStatus(faceContext, personGroupID)
if tErr != nil { log.Fatal(tErr) }
if trainingStatus.Status == "succeeded" {
fmt.Println("Training status:", trainingStatus.Status)
break
}
time.Sleep(2 * time.Second)
}
Tip
The Face API runs on a set of pre-built models that are static by nature (the model's performance will not regress or improve as the service is run). The results that the model produces might change if Microsoft updates the model's backend without migrating to an entirely new model version. To take advantage of a newer version of a model, you can retrain your PersonGroup, specifying the newer model as a parameter with the same enrollment images.
Identify a face
The Identify operation takes an image of a person (or multiple people) and looks to find the identity of each face in the image (facial recognition search). It compares each detected face to a PersonGroup, a database of different Person objects whose facial features are known.
Important
In order to run this example, you must first run the code in Create and train a PersonGroup.
Get a test image
The following code looks in the root of your project for an image test-image-person-group.jpg and loads it into program memory. You can find this image in the same repo as the images used in Create and train a PersonGroup: https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/Face/images.
personGroupTestImageName := "test-image-person-group.jpg"
// Use image path root from the one created in person group
personGroupTestImagePath := imagePathRoot
var personGroupTestImage io.ReadCloser
// Returns a ReaderCloser
personGroupTestImage, identErr:= os.Open(personGroupTestImagePath+personGroupTestImageName)
if identErr != nil { log.Fatal(identErr) }
Detect source faces in test image
The next code block does ordinary face detection on the test image to retrieve all of the faces and save them to an array.
// Detect faces in group test image, using recognition model 1 (default)
returnIdentifyFaceID := true
// Returns a ListDetectedFaces
// Recognition03 is not compatible.
// We specify detection model 2 because we are not retrieving attributes.
detectedTestImageFaces, dErr := client.DetectWithStream(faceContext, personGroupTestImage, &returnIdentifyFaceID, nil, nil, face.Recognition01, nil, face.Detection02)
if dErr != nil { log.Fatal(dErr) }
// Make list of face IDs from the detection.
length := len(*detectedTestImageFaces.Value)
testImageFaceIDs := make([]uuid.UUID, length)
// ListDetectedFace is a struct with a Value property that returns a *[]DetectedFace
for i, f := range *detectedTestImageFaces.Value {
testImageFaceIDs[i] = *f.FaceID
}
Identify faces
The Identify method takes the array of detected faces and compares them to the given PersonGroup (defined and trained in the earlier section). If it can match a detected face to a Person in the group, it saves the result.
// Identify the faces in the test image with everyone in the person group as a query
identifyRequestBody := face.IdentifyRequest { FaceIds: &testImageFaceIDs, PersonGroupID: &personGroupID }
identifiedFaces, err := client.Identify(faceContext, identifyRequestBody)
if err != nil { log.Fatal(err) }
This code then prints detailed match results to the console.
// Get the result which person(s) were identified
iFaces := *identifiedFaces.Value
for _, person := range iFaces {
fmt.Println("Person for face ID: " )
fmt.Print(person.FaceID)
fmt.Println(" is identified in " + personGroupTestImageName + ".")
}
Verify faces
The Verify operation takes a face ID and either another face ID or a Person object and determines whether they belong to the same person.
The following code detects faces in two source images and then verifies each of them against a face detected from a target image.
Get test images
The following code blocks declare variables that will point to the target and source images for the verification operation.
// Create a slice list to hold the target photos of the same person
targetImageFileNames := make([]string, 2)
targetImageFileNames[0] = "Family1-Dad1.jpg"
targetImageFileNames[1] = "Family1-Dad2.jpg"
// The source photos contain this person, maybe
sourceImageFileName1 := "Family1-Dad3.jpg"
sourceImageFileName2 := "Family1-Son1.jpg"
Detect faces for verification
The following code detects faces in the source and target images and saves them to variables.
// DetectWithURL parameters
urlSource1 := imageBaseURL + sourceImageFileName1
urlSource2 := imageBaseURL + sourceImageFileName2
url1 := face.ImageURL { URL: &urlSource1 }
url2 := face.ImageURL { URL: &urlSource2 }
returnFaceIDVerify := true
returnFaceLandmarksVerify := false
returnRecognitionModelVerify := false
// Detect face(s) from source image 1, returns a ListDetectedFace struct
// We specify detection model 2 because we are not retrieving attributes.
detectedVerifyFaces1, dErrV1 := client.DetectWithURL(faceContext, url1 , &returnFaceIDVerify, &returnFaceLandmarksVerify, nil, face.Recognition03, &returnRecognitionModelVerify, face.Detection02)
if dErrV1 != nil { log.Fatal(dErrV1) }
// Dereference the result, before getting the ID
dVFaceIds1 := *detectedVerifyFaces1.Value
// Get ID of the detected face
imageSource1Id := dVFaceIds1[0].FaceID
fmt.Println(fmt.Sprintf("%v face(s) detected from image: %v", len(dVFaceIds1), sourceImageFileName1))
// Detect face(s) from source image 2, returns a ListDetectedFace struct
// We specify detection model 2 because we are not retrieving attributes.
detectedVerifyFaces2, dErrV2 := client.DetectWithURL(faceContext, url2 , &returnFaceIDVerify, &returnFaceLandmarksVerify, nil, face.Recognition03, &returnRecognitionModelVerify, face.Detection02)
if dErrV2 != nil { log.Fatal(dErrV2) }
// Dereference the result, before getting the ID
dVFaceIds2 := *detectedVerifyFaces2.Value
// Get ID of the detected face
imageSource2Id := dVFaceIds2[0].FaceID
fmt.Println(fmt.Sprintf("%v face(s) detected from image: %v", len(dVFaceIds2), sourceImageFileName2))
// Detect faces from each target image URL in the list. DetectWithURL returns a ListDetectedFace struct whose Value field is a *[]DetectedFace.
// Empty slice list for the target face IDs (UUIDs)
var detectedVerifyFacesIds [2]uuid.UUID
for i, imageFileName := range targetImageFileNames {
urlSource := imageBaseURL + imageFileName
url := face.ImageURL { URL: &urlSource}
// We specify detection model 2 because we are not retrieving attributes.
detectedVerifyFaces, dErrV := client.DetectWithURL(faceContext, url, &returnFaceIDVerify, &returnFaceLandmarksVerify, nil, face.Recognition03, &returnRecognitionModelVerify, face.Detection02)
if dErrV != nil { log.Fatal(dErrV) }
// Dereference *[]DetectedFace from Value in order to loop through it.
dVFaces := *detectedVerifyFaces.Value
// Add the returned face's face ID
detectedVerifyFacesIds[i] = *dVFaces[0].FaceID
fmt.Println(fmt.Sprintf("%v face(s) detected from image: %v", len(dVFaces), imageFileName))
}
Get verification results
The following code compares each of the source images to the target image and prints a message indicating whether they belong to the same person.
// Verification example for faces of the same person. The higher the confidence, the more identical the faces in the images are.
// Since target faces are the same person, in this example, we can use the 1st ID in the detectedVerifyFacesIds list to compare.
verifyRequestBody1 := face.VerifyFaceToFaceRequest{ FaceID1: imageSource1Id, FaceID2: &detectedVerifyFacesIds[0] }
verifyResultSame, vErrSame := client.VerifyFaceToFace(faceContext, verifyRequestBody1)
if vErrSame != nil { log.Fatal(vErrSame) }
fmt.Println()
// Check if the faces are from the same person.
if (*verifyResultSame.IsIdentical) {
fmt.Println(fmt.Sprintf("Faces from %v & %v are of the same person, with confidence %v",
sourceImageFileName1, targetImageFileNames[0], strconv.FormatFloat(*verifyResultSame.Confidence, 'f', 3, 64)))
} else {
// Low confidence means the faces are likely of different people.
fmt.Println(fmt.Sprintf("Faces from %v & %v are of a different person, with confidence %v",
sourceImageFileName1, targetImageFileNames[0], strconv.FormatFloat(*verifyResultSame.Confidence, 'f', 3, 64)))
}
// Verification example for faces of different persons.
// Since target faces are same person, in this example, we can use the 1st ID in the detectedVerifyFacesIds list to compare.
verifyRequestBody2 := face.VerifyFaceToFaceRequest{ FaceID1: imageSource2Id, FaceID2: &detectedVerifyFacesIds[0] }
verifyResultDiff, vErrDiff := client.VerifyFaceToFace(faceContext, verifyRequestBody2)
if vErrDiff != nil { log.Fatal(vErrDiff) }
// Check if the faces are from the same person.
if (*verifyResultDiff.IsIdentical) {
fmt.Println(fmt.Sprintf("Faces from %v & %v are of the same person, with confidence %v",
sourceImageFileName2, targetImageFileNames[0], strconv.FormatFloat(*verifyResultDiff.Confidence, 'f', 3, 64)))
} else {
// Low confidence means the faces are likely of different people.
fmt.Println(fmt.Sprintf("Faces from %v & %v are of a different person, with confidence %v",
sourceImageFileName2, targetImageFileNames[0], strconv.FormatFloat(*verifyResultDiff.Confidence, 'f', 3, 64)))
}
Run the application
Run your face recognition app from the application directory with the go run <app-name> command.
go run sample-app.go
Clean up resources
If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
If you created a PersonGroup in this quickstart and you want to delete it, call the Delete method.
Next steps
In this quickstart, you learned how to use the Face client library for Go to do basic facial recognition tasks. Next, explore the reference documentation to learn more about the library.
- What is the Face service?
- The source code for this sample can be found on GitHub.
Quickstart: Face client library for JavaScript
Get started with facial recognition using the Face client library for JavaScript. Follow these steps to install the package and try out the example code for basic tasks. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images.
Use the Face client library for JavaScript to detect faces, find similar faces, create person groups, and identify people in images.
Reference documentation | Library source code | Package (npm) | Samples
Prerequisites
- Azure subscription - Create one for free
- The latest version of Node.js
- Once you have your Azure subscription, Create a Face resource in the Azure portal to get your key and endpoint. After it deploys, click Go to resource.
- You will need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below later in the quickstart.
- You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
Setting up
Create a new Node.js application
In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it.
mkdir myapp && cd myapp
Run the npm init command to create a node application with a package.json file.
npm init
Install the client library
Install the @azure/cognitiveservices-face and @azure/ms-rest-js NPM packages:
npm install @azure/cognitiveservices-face @azure/ms-rest-js
Your app's package.json file will be updated with the dependencies.
Create a file named index.js and import the following libraries:
Tip
Want to view the whole quickstart code file at once? You can find it on GitHub, which contains the code examples in this quickstart.
const msRest = require("@azure/ms-rest-js");
const Face = require("@azure/cognitiveservices-face");
const uuid = require("uuid/v4");
Create variables for your resource's Azure endpoint and key.
Important
Go to the Azure portal. If the Face resource you created in the Prerequisites section deployed successfully, click the Go to Resource button under Next Steps. You can find your key and endpoint on the resource's Keys and Endpoint page, under Resource Management.
Remember to remove the key from your code when you're done, and never post it publicly. For production, consider using a secure way of storing and accessing your credentials. See the Cognitive Services security article for more information.
key = "<paste-your-face-key-here>"
endpoint = "<paste-your-face-endpoint-here>"
Object model
The following classes and interfaces handle some of the major features of the Face JavaScript client library:
Name | Description |
---|---|
FaceClient | This class represents your authorization to use the Face service, and you need it for all Face functionality. You instantiate it with your subscription information, and you use it to produce instances of other classes. |
Face | This class handles the basic detection and recognition tasks that you can do with human faces. |
DetectedFace | This class represents all of the data that was detected from a single face in an image. You can use it to retrieve detailed information about the face. |
FaceList | This class manages the cloud-stored FaceList constructs, which store an assorted set of faces. |
PersonGroupPerson | This class manages the cloud-stored Person constructs, which store a set of faces that belong to a single person. |
PersonGroup | This class manages the cloud-stored PersonGroup constructs, which store a set of assorted Person objects. |
Code examples
The code snippets below show you how to do the following tasks with the Face client library for JavaScript:
- Authenticate the client
- Detect faces in an image
- Find similar faces
- Create a PersonGroup
- Identify a face
Tip
Want to view the whole quickstart code file at once? You can find it on GitHub, which contains the code examples in this quickstart.
Authenticate the client
Instantiate a client with your endpoint and key. Create an ApiKeyCredentials object with your key, and use it with your endpoint to create a FaceClient object.
const credentials = new msRest.ApiKeyCredentials({ inHeader: { 'Ocp-Apim-Subscription-Key': key } });
const client = new Face.FaceClient(credentials, endpoint);
Declare global values and helper function
The following global values are needed for several of the Face operations you'll add later.
The URL points to a folder of sample images. The UUID will serve as both the name and ID for the PersonGroup you will create.
const image_base_url = "https://csdx.blob.core.windows.net/resources/Face/Images/";
const person_group_id = uuid();
You'll use the following function to wait for the training of the PersonGroup to complete.
function sleep(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
Detect faces in an image
Get detected face objects
Create a new method to detect faces. The DetectFaceExtract method processes three of the images at the given URL and creates a list of DetectedFace objects in program memory. The list of FaceAttributeType values specifies which features to extract.
The DetectFaceExtract method then parses and prints the attribute data for each detected face. Each attribute must be specified separately in the original face detection API call (in the FaceAttributeType list). The following code processes every attribute, but you will likely only need to use one or a few.
async function DetectFaceExtract() {
console.log("========DETECT FACES========");
console.log();
// Create a list of images
const image_file_names = [
"detection1.jpg", // single female with glasses
// "detection2.jpg", // (optional: single man)
// "detection3.jpg", // (optional: single male construction worker)
// "detection4.jpg", // (optional: 3 people at cafe, 1 is blurred)
"detection5.jpg", // family, woman child man
"detection6.jpg" // elderly couple, male female
];
// NOTE await does not work properly in for, forEach, and while loops. Use Array.map and Promise.all instead.
await Promise.all (image_file_names.map (async function (image_file_name) {
let detected_faces = await client.face.detectWithUrl(image_base_url + image_file_name,
{
returnFaceAttributes: ["Accessories","Age","Blur","Emotion","Exposure","FacialHair","Gender","Glasses","Hair","HeadPose","Makeup","Noise","Occlusion","Smile"],
// We specify detection model 1 because we are retrieving attributes.
detectionModel: "detection_01"
});
console.log (detected_faces.length + " face(s) detected from image " + image_file_name + ".");
console.log("Face attributes for face(s) in " + image_file_name + ":");
// Parse and print all attributes of each detected face.
detected_faces.forEach (async function (face) {
// Get the bounding box of the face
console.log("Bounding box:\n Left: " + face.faceRectangle.left + "\n Top: " + face.faceRectangle.top + "\n Width: " + face.faceRectangle.width + "\n Height: " + face.faceRectangle.height);
// Get the accessories of the face
let accessories = face.faceAttributes.accessories.join();
if (0 === accessories.length) {
console.log ("No accessories detected.");
}
else {
console.log ("Accessories: " + accessories);
}
// Get face other attributes
console.log("Age: " + face.faceAttributes.age);
console.log("Blur: " + face.faceAttributes.blur.blurLevel);
// Get emotion on the face
let emotions = "";
let emotion_threshold = 0.0;
if (face.faceAttributes.emotion.anger > emotion_threshold) { emotions += "anger, "; }
if (face.faceAttributes.emotion.contempt > emotion_threshold) { emotions += "contempt, "; }
if (face.faceAttributes.emotion.disgust > emotion_threshold) { emotions += "disgust, "; }
if (face.faceAttributes.emotion.fear > emotion_threshold) { emotions += "fear, "; }
if (face.faceAttributes.emotion.happiness > emotion_threshold) { emotions += "happiness, "; }
if (face.faceAttributes.emotion.neutral > emotion_threshold) { emotions += "neutral, "; }
if (face.faceAttributes.emotion.sadness > emotion_threshold) { emotions += "sadness, "; }
if (face.faceAttributes.emotion.surprise > emotion_threshold) { emotions += "surprise, "; }
if (emotions.length > 0) {
console.log ("Emotions: " + emotions.slice (0, -2));
}
else {
console.log ("No emotions detected.");
}
// Get more face attributes
console.log("Exposure: " + face.faceAttributes.exposure.exposureLevel);
if (face.faceAttributes.facialHair.moustache + face.faceAttributes.facialHair.beard + face.faceAttributes.facialHair.sideburns > 0) {
console.log("FacialHair: Yes");
}
else {
console.log("FacialHair: No");
}
console.log("Gender: " + face.faceAttributes.gender);
console.log("Glasses: " + face.faceAttributes.glasses);
// Get hair color
var color = "";
if (face.faceAttributes.hair.hairColor.length === 0) {
if (face.faceAttributes.hair.invisible) { color = "Invisible"; } else { color = "Bald"; }
}
else {
color = "Unknown";
var highest_confidence = 0.0;
face.faceAttributes.hair.hairColor.forEach (function (hair_color) {
if (hair_color.confidence > highest_confidence) {
highest_confidence = hair_color.confidence;
color = hair_color.color;
}
});
}
console.log("Hair: " + color);
// Get more attributes
console.log("Head pose:");
console.log(" Pitch: " + face.faceAttributes.headPose.pitch);
console.log(" Roll: " + face.faceAttributes.headPose.roll);
console.log(" Yaw: " + face.faceAttributes.headPose.yaw);
console.log("Makeup: " + ((face.faceAttributes.makeup.eyeMakeup || face.faceAttributes.makeup.lipMakeup) ? "Yes" : "No"));
console.log("Noise: " + face.faceAttributes.noise.noiseLevel);
console.log("Occlusion:");
console.log(" Eye occluded: " + (face.faceAttributes.occlusion.eyeOccluded ? "Yes" : "No"));
console.log(" Forehead occluded: " + (face.faceAttributes.occlusion.foreheadOccluded ? "Yes" : "No"));
console.log(" Mouth occluded: " + (face.faceAttributes.occlusion.mouthOccluded ? "Yes" : "No"));
console.log("Smile: " + face.faceAttributes.smile);
console.log();
});
}));
}
Tip
You can also detect faces in a local image. See the Face methods such as detectWithStream.
Find similar faces
The following code takes a single detected face (source) and searches a set of other faces (target) to find matches (face search by image). When it finds a match, it prints the ID of the matched face to the console.
Detect faces for comparison
First, define a second face detection method. You need to detect faces in images before you can compare them, and this detection method is optimized for comparison operations. It doesn't extract detailed face attributes like in the section above, and it uses a different recognition model.
async function DetectFaceRecognize(url) {
// Detect faces from an image URL. Since we are only recognizing faces, use recognition model 3.
// We use detection model 2 because we are not retrieving attributes.
let detected_faces = await client.face.detectWithUrl(url,
{
detectionModel: "detection_02",
recognitionModel: "recognition_03"
});
return detected_faces;
}
Find matches
The following method detects faces in a set of target images and in a single source image. Then, it compares them and finds all the target images that are similar to the source image. Finally, it prints the match details to the console.
async function FindSimilar() {
console.log("========FIND SIMILAR========");
console.log();
const source_image_file_name = "findsimilar.jpg";
const target_image_file_names = [
"Family1-Dad1.jpg",
"Family1-Daughter1.jpg",
"Family1-Mom1.jpg",
"Family1-Son1.jpg",
"Family2-Lady1.jpg",
"Family2-Man1.jpg",
"Family3-Lady1.jpg",
"Family3-Man1.jpg"
];
let target_face_ids = (await Promise.all (target_image_file_names.map (async function (target_image_file_name) {
// Detect faces from target image url.
var faces = await DetectFaceRecognize(image_base_url + target_image_file_name);
console.log(faces.length + " face(s) detected from image: " + target_image_file_name + ".");
return faces.map (function (face) { return face.faceId });
}))).flat();
// Detect faces from source image url.
let detected_faces = await DetectFaceRecognize(image_base_url + source_image_file_name);
// Find similar face(s) in the list of IDs. Comparing only the first face in the list for testing purposes.
let results = await client.face.findSimilar(detected_faces[0].faceId, { faceIds : target_face_ids });
results.forEach (function (result) {
console.log("Faces from: " + source_image_file_name + " and ID: " + result.faceId + " are similar with confidence: " + result.confidence + ".");
});
console.log();
}
Identify a face
The Identify operation takes an image of a person (or multiple people) and looks to find the identity of each face in the image (facial recognition search). It compares each detected face to a PersonGroup, a database of different Person objects whose facial features are known. In order to do the Identify operation, you first need to create and train a PersonGroup.
Add faces to PersonGroup
Create the following function to add faces to the PersonGroup.
async function AddFacesToPersonGroup(person_dictionary, person_group_id) {
console.log ("Adding faces to person group...");
// The similar faces will be grouped into a single person group person.
await Promise.all (Object.keys(person_dictionary).map (async function (key) {
const value = person_dictionary[key];
// Wait briefly so we do not exceed rate limits.
await sleep (1000);
let person = await client.personGroupPerson.create(person_group_id, { name : key });
console.log("Create a person group person: " + key + ".");
// Add faces to the person group person.
await Promise.all (value.map (async function (similar_image) {
console.log("Add face to the person group person: (" + key + ") from image: " + similar_image + ".");
await client.personGroupPerson.addFaceFromUrl(person_group_id, person.personId, image_base_url + similar_image);
}));
}));
console.log ("Done adding faces to person group.");
}
Wait for training of PersonGroup
Create the following helper function to wait for the PersonGroup to finish training.
async function WaitForPersonGroupTraining(person_group_id) {
// Wait so we do not exceed rate limits.
console.log ("Waiting 10 seconds...");
await sleep (10000);
let result = await client.personGroup.getTrainingStatus(person_group_id);
console.log("Training status: " + result.status + ".");
if (result.status !== "succeeded") {
await WaitForPersonGroupTraining(person_group_id);
}
}
Create a PersonGroup
The following code:
- Creates a PersonGroup.
- Adds faces to the PersonGroup by calling AddFacesToPersonGroup, which you defined previously.
- Trains the PersonGroup.
- Identifies the faces in the PersonGroup.
This PersonGroup and its associated Person objects are now ready to be used in the Verify, Identify, or Group operations.
async function IdentifyInPersonGroup() {
console.log("========IDENTIFY FACES========");
console.log();
// Create a dictionary for all your images, grouping similar ones under the same key.
const person_dictionary = {
"Family1-Dad" : ["Family1-Dad1.jpg", "Family1-Dad2.jpg"],
"Family1-Mom" : ["Family1-Mom1.jpg", "Family1-Mom2.jpg"],
"Family1-Son" : ["Family1-Son1.jpg", "Family1-Son2.jpg"],
"Family1-Daughter" : ["Family1-Daughter1.jpg", "Family1-Daughter2.jpg"],
"Family2-Lady" : ["Family2-Lady1.jpg", "Family2-Lady2.jpg"],
"Family2-Man" : ["Family2-Man1.jpg", "Family2-Man2.jpg"]
};
// A group photo that includes some of the persons you seek to identify from your dictionary.
let source_image_file_name = "identification1.jpg";
// Create a person group.
console.log("Creating a person group with ID: " + person_group_id);
await client.personGroup.create(person_group_id, { name : person_group_id, recognitionModel : "recognition_03" });
await AddFacesToPersonGroup(person_dictionary, person_group_id);
// Start to train the person group.
console.log();
console.log("Training person group: " + person_group_id + ".");
await client.personGroup.train(person_group_id);
await WaitForPersonGroupTraining(person_group_id);
console.log();
// Detect faces from source image url.
let face_ids = (await DetectFaceRecognize(image_base_url + source_image_file_name)).map (face => face.faceId);
// Identify the faces in a person group.
let results = await client.face.identify(face_ids, { personGroupId : person_group_id});
await Promise.all (results.map (async function (result) {
// Skip faces that could not be matched to anyone in the person group.
if (result.candidates.length === 0) { console.log("No person identified for face with ID: " + result.faceId + "."); return; }
let person = await client.personGroupPerson.get(person_group_id, result.candidates[0].personId);
console.log("Person: " + person.name + " is identified for face in: " + source_image_file_name + " with ID: " + result.faceId + ". Confidence: " + result.candidates[0].confidence + ".");
}));
console.log();
}
Tip
You can also create a PersonGroup from local images. See the PersonGroupPerson methods such as addFaceFromStream.
Main
Finally, create the main function and call it.
async function main() {
await DetectFaceExtract();
await FindSimilar();
await IdentifyInPersonGroup();
console.log ("Done.");
}
main();
Run the application
Run the application with the node command on your quickstart file.
node index.js
Clean up resources
If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
Next steps
In this quickstart, you learned how to use the Face client library for JavaScript to do basic facial recognition tasks. Next, explore the reference documentation to learn more about the library.
- What is the Face service?
- The source code for this sample can be found on GitHub.
Get started with facial recognition using the Face client library for Python. Follow these steps to install the package and try out the example code for basic tasks. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images.
Use the Face client library for Python to:
- Detect faces in an image
- Find similar faces
- Create and train a PersonGroup
- Identify a face
- Verify faces
Reference documentation | Library source code | Package (PyPI) | Samples
Prerequisites
- Azure subscription - Create one for free
- Python 3.x
- Your Python installation should include pip. You can check if you have pip installed by running pip --version on the command line. Get pip by installing the latest version of Python.
- Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, click Go to resource.
- You will need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below later in the quickstart.
- You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
Setting up
Install the client library
After installing Python, you can install the client library with:
pip install --upgrade azure-cognitiveservices-vision-face
Create a new Python application
Create a new Python script—quickstart-file.py, for example. Then open it in your preferred editor or IDE and import the following libraries.
import asyncio
import io
import glob
import os
import sys
import time
import uuid
import requests
from urllib.parse import urlparse
from io import BytesIO
# To install this module, run:
# python -m pip install Pillow
from PIL import Image, ImageDraw
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials
from azure.cognitiveservices.vision.face.models import TrainingStatusType, Person
Tip
Want to view the whole quickstart code file at once? You can find it on GitHub, which contains the code examples in this quickstart.
Then, create variables for your resource's Azure endpoint and key.
# This key will serve all examples in this document.
KEY = "PASTE_YOUR_FACE_SUBSCRIPTION_KEY_HERE"
# This endpoint will be used in all examples in this quickstart.
ENDPOINT = "PASTE_YOUR_FACE_ENDPOINT_HERE"
Important
Go to the Azure portal. If the Face resource you created in the Prerequisites section deployed successfully, click the Go to Resource button under Next Steps. You can find your key and endpoint in the resource's key and endpoint page, under resource management.
Remember to remove the key from your code when you're done, and never post it publicly. For production, consider using a secure way of storing and accessing your credentials, such as Azure Key Vault or environment variables.
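As a minimal sketch of the environment-variable approach, the following reads the credentials instead of hard-coding them. The variable names FACE_SUBSCRIPTION_KEY and FACE_ENDPOINT are placeholders you can choose yourself:
import os
# Placeholder environment variable names; set them in your shell before running the script.
KEY = os.environ["FACE_SUBSCRIPTION_KEY"]
ENDPOINT = os.environ["FACE_ENDPOINT"]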
Object model
The following classes and interfaces handle some of the major features of the Face Python client library.
Name | Description |
---|---|
FaceClient | This class represents your authorization to use the Face service, and you need it for all Face functionality. You instantiate it with your subscription information, and you use it to produce instances of other classes. |
FaceOperations | This class handles the basic detection and recognition tasks that you can do with human faces. |
DetectedFace | This class represents all of the data that was detected from a single face in an image. You can use it to retrieve detailed information about the face. |
FaceListOperations | This class manages the cloud-stored FaceList constructs, which store an assorted set of faces. |
PersonGroupPersonOperations | This class manages the cloud-stored Person constructs, which store a set of faces that belong to a single person. |
PersonGroupOperations | This class manages the cloud-stored PersonGroup constructs, which store a set of assorted Person objects. |
SnapshotOperations | This class manages the Snapshot functionality; you can use it to temporarily save all of your cloud-based face data and migrate that data to a new Azure subscription. |
Code examples
These code snippets show you how to do the following tasks with the Face client library for Python:
- Authenticate the client
- Detect faces in an image
- Find similar faces
- Create and train a PersonGroup
- Identify a face
- Verify faces
Authenticate the client
Instantiate a client with your endpoint and key. Create a CognitiveServicesCredentials object with your key, and use it with your endpoint to create a FaceClient object.
# Create an authenticated FaceClient.
face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))
Detect faces in an image
The following code detects a face in a remote image. It prints the detected face's ID to the console and also stores it in program memory. Then, it detects the faces in an image with multiple people and prints their IDs to the console as well. By changing the parameters in the detect_with_url method, you can return different information with each DetectedFace object.
# Detect a face in an image that contains a single face
single_face_image_url = 'https://www.biography.com/.image/t_share/MTQ1MzAyNzYzOTgxNTE0NTEz/john-f-kennedy---mini-biography.jpg'
single_image_name = os.path.basename(single_face_image_url)
# We use detection model 3 to get better performance.
detected_faces = face_client.face.detect_with_url(url=single_face_image_url, detection_model='detection_03')
if not detected_faces:
raise Exception('No face detected from image {}'.format(single_image_name))
# Display the detected face ID in the first single-face image.
# Face IDs are used for comparison to faces (their IDs) detected in other images.
print('Detected face ID from', single_image_name, ':')
for face in detected_faces: print (face.face_id)
print()
# Save this ID for use in Find Similar
first_image_face_ID = detected_faces[0].face_id
Tip
You can also detect faces in a local image. See the FaceOperations methods such as detect_with_stream.
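For example, a brief sketch of local detection with detect_with_stream might look like the following; my-local-image.jpg is a placeholder file name for any image on disk:
# Sketch: detect faces in a local image file instead of a URL.
with open('my-local-image.jpg', 'rb') as image_stream:
    local_faces = face_client.face.detect_with_stream(image_stream, detection_model='detection_03')
print('{} face(s) detected in the local image.'.format(len(local_faces)))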
Display and frame faces
The following code outputs the given image to the display and draws rectangles around the faces, using the DetectedFace.face_rectangle property.
# Detect a face in an image that contains a single face
single_face_image_url = 'https://raw.githubusercontent.com/Microsoft/Cognitive-Face-Windows/master/Data/detection1.jpg'
single_image_name = os.path.basename(single_face_image_url)
# We use detection model 3 to get better performance.
detected_faces = face_client.face.detect_with_url(url=single_face_image_url, detection_model='detection_03')
if not detected_faces:
raise Exception('No face detected from image {}'.format(single_image_name))
# Convert width height to a point in a rectangle
def getRectangle(faceDictionary):
rect = faceDictionary.face_rectangle
left = rect.left
top = rect.top
right = left + rect.width
bottom = top + rect.height
return ((left, top), (right, bottom))
# Download the image from the url
response = requests.get(single_face_image_url)
img = Image.open(BytesIO(response.content))
# For each face returned use the face rectangle and draw a red box.
print('Drawing rectangle around face... see popup for results.')
draw = ImageDraw.Draw(img)
for face in detected_faces:
draw.rectangle(getRectangle(face), outline='red')
# Display the image in the users default image browser.
img.show()
Find similar faces
The following code takes a single detected face (source) and searches a set of other faces (target) to find matches (face search by image). When it finds a match, it prints the ID of the matched face to the console.
Find matches
First, run the code in the above section (Detect faces in an image) to save a reference to a single face. Then run the following code to get references to several faces in a group image.
# Detect the faces in an image that contains multiple faces
# Each detected face gets assigned a new ID
multi_face_image_url = "http://www.historyplace.com/kennedy/president-family-portrait-closeup.jpg"
multi_image_name = os.path.basename(multi_face_image_url)
# We use detection model 3 to get better performance.
detected_faces2 = face_client.face.detect_with_url(url=multi_face_image_url, detection_model='detection_03')
Then add the following code block to find instances of the first face in the group. See the find_similar method to learn how to modify this behavior.
# Search through faces detected in group image for the single face from first image.
# First, create a list of the face IDs found in the second image.
second_image_face_IDs = list(map(lambda x: x.face_id, detected_faces2))
# Next, find similar face IDs like the one detected in the first image.
similar_faces = face_client.face.find_similar(face_id=first_image_face_ID, face_ids=second_image_face_IDs)
if not similar_faces:
print('No similar faces found in', multi_image_name, '.')
Print matches
Use the following code to print the details of any similar faces that were found to the console.
# Print the details of the similar faces detected
if similar_faces:
print('Similar faces found in', multi_image_name + ':')
for face in similar_faces:
first_image_face_ID = face.face_id
# The similar face IDs of the single face image and the group image do not need to match,
# they are only used for identification purposes in each image.
# The similar faces are matched using the Cognitive Services algorithm in find_similar().
face_info = next(x for x in detected_faces2 if x.face_id == first_image_face_ID)
if face_info:
print(' Face ID: ', first_image_face_ID)
print(' Face rectangle:')
print(' Left: ', str(face_info.face_rectangle.left))
print(' Top: ', str(face_info.face_rectangle.top))
print(' Width: ', str(face_info.face_rectangle.width))
print(' Height: ', str(face_info.face_rectangle.height))
Create and train a PersonGroup
The following code creates a PersonGroup with three different Person objects. It associates each Person with a set of example images, and then it trains the PersonGroup to recognize each person.
Create PersonGroup
To step through this scenario, you need to save the following images to the root directory of your project: https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/Face/images.
This group of images contains three sets of face images corresponding to three different people. The code will define three Person objects and associate them with image files that start with woman, man, and child.
Once you've set up your images, define a label at the top of your script for the PersonGroup object you'll create.
# Used in the Person Group Operations and Delete Person Group examples.
# You can call list_person_groups to print a list of preexisting PersonGroups.
# PERSON_GROUP_ID should be all lowercase and alphanumeric. For example, 'mygroupname' (dashes are OK).
PERSON_GROUP_ID = str(uuid.uuid4()) # assign a random ID (or name it anything)
# Used for the Delete Person Group example.
TARGET_PERSON_GROUP_ID = str(uuid.uuid4()) # assign a random ID (or name it anything)
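If you want to check which PersonGroups already exist in your resource before choosing an ID, a quick sketch using the SDK's person_group.list operation might look like this:
# Sketch: print any PersonGroups that already exist in this Face resource.
for existing_group in face_client.person_group.list():
    print(existing_group.person_group_id, '-', existing_group.name)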
Then add the following code to the bottom of your script. This code creates a PersonGroup and three Person objects.
'''
Create the PersonGroup
'''
# Create empty Person Group. Person Group ID must be lower case, alphanumeric, and/or with '-', '_'.
print('Person group:', PERSON_GROUP_ID)
face_client.person_group.create(person_group_id=PERSON_GROUP_ID, name=PERSON_GROUP_ID)
# Define woman friend
woman = face_client.person_group_person.create(PERSON_GROUP_ID, "Woman")
# Define man friend
man = face_client.person_group_person.create(PERSON_GROUP_ID, "Man")
# Define child friend
child = face_client.person_group_person.create(PERSON_GROUP_ID, "Child")
Assign faces to Persons
The following code sorts your images by their prefix, detects faces, and assigns the faces to each Person object.
'''
Detect faces and register to correct person
'''
# Find all jpeg images of friends in working directory
woman_images = [file for file in glob.glob('*.jpg') if file.startswith("w")]
man_images = [file for file in glob.glob('*.jpg') if file.startswith("m")]
child_images = [file for file in glob.glob('*.jpg') if file.startswith("ch")]
# Add to a woman person
for image in woman_images:
w = open(image, 'r+b')
face_client.person_group_person.add_face_from_stream(PERSON_GROUP_ID, woman.person_id, w)
# Add to a man person
for image in man_images:
m = open(image, 'r+b')
face_client.person_group_person.add_face_from_stream(PERSON_GROUP_ID, man.person_id, m)
# Add to a child person
for image in child_images:
ch = open(image, 'r+b')
face_client.person_group_person.add_face_from_stream(PERSON_GROUP_ID, child.person_id, ch)
Tip
You can also create a PersonGroup from remote images referenced by URL. See the PersonGroupPersonOperations methods such as add_face_from_url.
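For instance, a hedged sketch of enrolling a remote image for the woman Person might look like the following; the URL reuses one of the sample images hosted at the csdx storage location used elsewhere in this quickstart, so substitute an image of the person you are actually enrolling:
# Sketch: add a face to a Person from a remote image URL instead of a local file stream.
# Example URL only; point this at an image of the same person you are enrolling.
remote_image_url = 'https://csdx.blob.core.windows.net/resources/Face/Images/Family1-Mom1.jpg'
face_client.person_group_person.add_face_from_url(PERSON_GROUP_ID, woman.person_id, remote_image_url)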
Train PersonGroup
Once you've assigned faces, you must train the PersonGroup so that it can identify the visual features associated with each of its Person objects. The following code calls the asynchronous train method and polls the result, printing the status to the console.
'''
Train PersonGroup
'''
print()
print('Training the person group...')
# Train the person group
face_client.person_group.train(PERSON_GROUP_ID)
while (True):
training_status = face_client.person_group.get_training_status(PERSON_GROUP_ID)
print("Training status: {}.".format(training_status.status))
print()
if (training_status.status is TrainingStatusType.succeeded):
break
elif (training_status.status is TrainingStatusType.failed):
face_client.person_group.delete(person_group_id=PERSON_GROUP_ID)
sys.exit('Training the person group has failed.')
time.sleep(5)
Tip
The Face API runs on a set of pre-built models that are static by nature (the model's performance will not regress or improve as the service is run). The results that the model produces might change if Microsoft updates the model's backend without migrating to an entirely new model version. To take advantage of a newer version of a model, you can retrain your PersonGroup, specifying the newer model as a parameter with the same enrollment images.
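In practice, that means creating a PersonGroup that specifies the desired recognition model, re-adding the same enrollment images, and training again. A rough sketch, using recognition_04 only as an example of a newer model name, might look like this:
# Sketch: enroll the same images into a new PersonGroup that specifies a newer recognition model.
NEW_PERSON_GROUP_ID = str(uuid.uuid4())
face_client.person_group.create(person_group_id=NEW_PERSON_GROUP_ID, name=NEW_PERSON_GROUP_ID, recognition_model='recognition_04')
# ...re-add the same enrollment faces to the new group here, as shown earlier...
face_client.person_group.train(NEW_PERSON_GROUP_ID)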
Identify a face
The Identify operation takes an image of a person (or multiple people) and looks to find the identity of each face in the image (facial recognition search). It compares each detected face to a PersonGroup, a database of different Person objects whose facial features are known.
Important
In order to run this example, you must first run the code in Create and train a PersonGroup.
Get a test image
The following code looks in the root of your project for an image test-image-person-group.jpg and detects the faces in the image. You can find this image with the images used for PersonGroup management: https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/Face/images.
'''
Identify a face against a defined PersonGroup
'''
# Group image for testing against
test_image_array = glob.glob('test-image-person-group.jpg')
image = open(test_image_array[0], 'r+b')
print('Pausing for 60 seconds to avoid triggering rate limit on free account...')
time.sleep (60)
# Detect faces
face_ids = []
# We use detection model 3 to get better performance.
faces = face_client.face.detect_with_stream(image, detection_model='detection_03')
for face in faces:
face_ids.append(face.face_id)
Identify faces
The identify method takes an array of detected face IDs and compares them to a PersonGroup. If it can match a detected face to a Person, it saves the result. This code prints detailed match results to the console.
# Identify faces
results = face_client.face.identify(face_ids, PERSON_GROUP_ID)
print('Identifying faces in {}'.format(os.path.basename(image.name)))
if not results:
print('No person identified in the person group for faces from {}.'.format(os.path.basename(image.name)))
for person in results:
if len(person.candidates) > 0:
print('Person for face ID {} is identified in {} with a confidence of {}.'.format(person.face_id, os.path.basename(image.name), person.candidates[0].confidence)) # Get topmost confidence score
else:
print('No person identified for face ID {} in {}.'.format(person.face_id, os.path.basename(image.name)))
Verify faces
The Verify operation takes a face ID and either another face ID or a Person object and determines whether they belong to the same person.
The following code detects faces in two source images and then verifies them against a face detected from a target image.
Get test images
The following code blocks declare variables that will point to the source and target images for the verification operation.
# Base url for the Verify and Facelist/Large Facelist operations
IMAGE_BASE_URL = 'https://csdx.blob.core.windows.net/resources/Face/Images/'
# Create a list to hold the target photos of the same person
target_image_file_names = ['Family1-Dad1.jpg', 'Family1-Dad2.jpg']
# The source photos contain this person
source_image_file_name1 = 'Family1-Dad3.jpg'
source_image_file_name2 = 'Family1-Son1.jpg'
Detect faces for verification
The following code detects faces in the source and target images and saves them to variables.
# Detect face(s) from source image 1, returns a list[DetectedFaces]
# We use detection model 3 to get better performance.
detected_faces1 = face_client.face.detect_with_url(IMAGE_BASE_URL + source_image_file_name1, detection_model='detection_03')
# Add the returned face's face ID
source_image1_id = detected_faces1[0].face_id
print('{} face(s) detected from image {}.'.format(len(detected_faces1), source_image_file_name1))
# Detect face(s) from source image 2, returns a list[DetectedFaces]
detected_faces2 = face_client.face.detect_with_url(IMAGE_BASE_URL + source_image_file_name2, detection_model='detection_03')
# Add the returned face's face ID
source_image2_id = detected_faces2[0].face_id
print('{} face(s) detected from image {}.'.format(len(detected_faces2), source_image_file_name2))
# List for the target face IDs (uuids)
detected_faces_ids = []
# Detect faces from target image url list, returns a list[DetectedFaces]
for image_file_name in target_image_file_names:
# We use detection model 3 to get better performance.
detected_faces = face_client.face.detect_with_url(IMAGE_BASE_URL + image_file_name, detection_model='detection_03')
# Add the returned face's face ID
detected_faces_ids.append(detected_faces[0].face_id)
print('{} face(s) detected from image {}.'.format(len(detected_faces), image_file_name))
Get verification results
The following code compares each of the source images to the target image and prints a message indicating whether they belong to the same person.
# Verification example for faces of the same person. The higher the confidence, the more identical the faces in the images are.
# Since target faces are the same person, in this example, we can use the 1st ID in the detected_faces_ids list to compare.
verify_result_same = face_client.face.verify_face_to_face(source_image1_id, detected_faces_ids[0])
print('Faces from {} & {} are of the same person, with confidence: {}'
.format(source_image_file_name1, target_image_file_names[0], verify_result_same.confidence)
if verify_result_same.is_identical
else 'Faces from {} & {} are of a different person, with confidence: {}'
.format(source_image_file_name1, target_image_file_names[0], verify_result_same.confidence))
# Verification example for faces of different persons.
# Since the target faces are of the same person, we can again use the 1st ID in the detected_faces_ids list to compare.
verify_result_diff = face_client.face.verify_face_to_face(source_image2_id, detected_faces_ids[0])
print('Faces from {} & {} are of the same person, with confidence: {}'
.format(source_image_file_name2, target_image_file_names[0], verify_result_diff.confidence)
if verify_result_diff.is_identical
else 'Faces from {} & {} are of a different person, with confidence: {}'
.format(source_image_file_name2, target_image_file_names[0], verify_result_diff.confidence))
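Verify can also compare a detected face against a Person in a trained PersonGroup. As a sketch, assuming the PersonGroup and the man Person from the earlier section are still available and that the SDK's verify_face_to_person operation is used:
# Sketch: verify a detected face against a Person object in a trained PersonGroup.
verify_result_person = face_client.face.verify_face_to_person(
    face_id=source_image1_id, person_group_id=PERSON_GROUP_ID, person_id=man.person_id)
print('Face and person match: {} (confidence: {}).'.format(
    verify_result_person.is_identical, verify_result_person.confidence))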
Run the application
Run your face recognition app from the application directory with the python command.
python quickstart-file.py
Clean up resources
If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
If you created a PersonGroup in this quickstart and you want to delete it, run the following code in your script:
# Delete the main person group.
face_client.person_group.delete(person_group_id=PERSON_GROUP_ID)
print("Deleted the person group {} from the source location.".format(PERSON_GROUP_ID))
print()
Next steps
In this quickstart, you learned how to use the Face client library for Python to do basic facial recognition tasks. Next, explore the reference documentation to learn more about the library.
- What is the Face service?
- The source code for this sample can be found on GitHub.
Get started with facial recognition using the Face REST API. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images.
Use the Face REST API to:
Note
This quickstart uses cURL commands to call the REST API. You can also call the REST API using a programming language. See the GitHub samples for examples in C#, Python, Java, JavaScript, and Go.
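For example, a minimal Python sketch of the Detect call below, using the requests library, might look like the following; the endpoint and key placeholders are the same ones used in the cURL commands:
import requests

# Placeholders: substitute your own Face endpoint and subscription key.
endpoint = "TODO_INSERT_YOUR_FACE_ENDPOINT_HERE"
key = "TODO_INSERT_YOUR_FACE_SUBSCRIPTION_KEY_HERE"

response = requests.post(
    endpoint + "/face/v1.0/detect",
    params={"detectionModel": "detection_02", "returnFaceId": "true", "returnFaceLandmarks": "false"},
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "https://upload.wikimedia.org/wikipedia/commons/c/c3/RH_Louise_Lillian_Gish.jpg"})
print(response.json())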
Prerequisites
- Azure subscription - Create one for free
- Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, click Go to resource.
- You'll need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below later in the quickstart.
- You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
- PowerShell version 6.0+, or a similar command-line application.
Detect faces in an image
You'll use a command like the following to call the Face API and get face attribute data from an image. First, copy the code into a text editor—you'll need to make changes to certain parts of the command before you can run it.
curl -H "Ocp-Apim-Subscription-Key: TODO_INSERT_YOUR_FACE_SUBSCRIPTION_KEY_HERE" "TODO_INSERT_YOUR_FACE_ENDPOINT_HERE/face/v1.0/detect?detectionModel=detection_02&returnFaceId=true&returnFaceLandmarks=false" -H "Content-Type: application/json" --data-ascii "{\"url\":\"https://upload.wikimedia.org/wikipedia/commons/c/c3/RH_Louise_Lillian_Gish.jpg\"}"
Make the following changes:
- Assign Ocp-Apim-Subscription-Key to your valid Face subscription key.
- Change the first part of the query URL to match the endpoint that corresponds to your subscription key.
Note
New resources created after July 1, 2019, will use custom subdomain names. For more information and a complete list of regional endpoints, see Custom subdomain names for Cognitive Services.
- Optionally change the URL in the body of the request to point to a different image.
Once you've made your changes, open a command prompt and enter the new command.
Examine the results
You should see the face information displayed as JSON data in the console window. For example:
[
{
"faceId": "49d55c17-e018-4a42-ba7b-8cbbdfae7c6f",
"faceRectangle": {
"top": 131,
"left": 177,
"width": 162,
"height": 162
}
}
]
Get face attributes
To extract face attributes, call the Detect API again, but set detectionModel to detection_01 and add the returnFaceAttributes query parameter. The command should now look like the following. As before, insert your Face subscription key and endpoint.
curl -H "Ocp-Apim-Subscription-Key: TODO_INSERT_YOUR_FACE_SUBSCRIPTION_KEY_HERE" "TODO_INSERT_YOUR_FACE_ENDPOINT_HERE/face/v1.0/detect?detectionModel=detection_01&returnFaceId=true&returnFaceLandmarks=false&returnFaceAttributes=age,gender,headPose,smile,facialHair,glasses,emotion,hair,makeup,occlusion,accessories,blur,exposure,noise" -H "Content-Type: application/json" --data-ascii "{\"url\":\"https://upload.wikimedia.org/wikipedia/commons/c/c3/RH_Louise_Lillian_Gish.jpg\"}"
Examine the results
The returned face information now includes face attributes. For example:
[
{
"faceId": "49d55c17-e018-4a42-ba7b-8cbbdfae7c6f",
"faceRectangle": {
"top": 131,
"left": 177,
"width": 162,
"height": 162
},
"faceAttributes": {
"smile": 0,
"headPose": {
"pitch": 0,
"roll": 0.1,
"yaw": -32.9
},
"gender": "female",
"age": 22.9,
"facialHair": {
"moustache": 0,
"beard": 0,
"sideburns": 0
},
"glasses": "NoGlasses",
"emotion": {
"anger": 0,
"contempt": 0,
"disgust": 0,
"fear": 0,
"happiness": 0,
"neutral": 0.986,
"sadness": 0.009,
"surprise": 0.005
},
"blur": {
"blurLevel": "low",
"value": 0.06
},
"exposure": {
"exposureLevel": "goodExposure",
"value": 0.67
},
"noise": {
"noiseLevel": "low",
"value": 0
},
"makeup": {
"eyeMakeup": true,
"lipMakeup": true
},
"accessories": [],
"occlusion": {
"foreheadOccluded": false,
"eyeOccluded": false,
"mouthOccluded": false
},
"hair": {
"bald": 0,
"invisible": false,
"hairColor": [
{
"color": "brown",
"confidence": 1
},
{
"color": "black",
"confidence": 0.87
},
{
"color": "other",
"confidence": 0.51
},
{
"color": "blond",
"confidence": 0.08
},
{
"color": "red",
"confidence": 0.08
},
{
"color": "gray",
"confidence": 0.02
}
]
}
}
}
]
Find similar faces
This operation takes a single detected face (source) and searches a set of other faces (target) to find matches (face search by image). When it finds a match, the API returns the IDs of the matched faces.
Detect faces for comparison
First, you need to detect faces in images before you can compare them. Run this command as you did in the Detect faces section. This detection method is optimized for comparison operations. It doesn't extract detailed face attributes like in the section above, and it uses a different detection model.
curl -H "Ocp-Apim-Subscription-Key: TODO_INSERT_YOUR_FACE_SUBSCRIPTION_KEY_HERE" "TODO_INSERT_YOUR_FACE_ENDPOINT_HERE/face/v1.0/detect?detectionModel=detection_02&returnFaceId=true&returnFaceLandmarks=false" -H "Content-Type: application/json" --data-ascii "{\"url\":\"https://csdx.blob.core.windows.net/resources/Face/Images/Family1-Dad1.jpg\"}"
Find the "faceId" value in the JSON response and save it to a temporary location. Then, call the above command again for these other image URLs, and save their face IDs as well. You'll use these IDs as the target group of faces from which to find a similar face.
https://csdx.blob.core.windows.net/resources/Face/Images/Family1-Daughter1.jpg
https://csdx.blob.core.windows.net/resources/Face/Images/Family1-Mom1.jpg
https://csdx.blob.core.windows.net/resources/Face/Images/Family1-Son1.jpg
https://csdx.blob.core.windows.net/resources/Face/Images/Family2-Lady1.jpg
https://csdx.blob.core.windows.net/resources/Face/Images/Family2-Man1.jpg
https://csdx.blob.core.windows.net/resources/Face/Images/Family3-Lady1.jpg
https://csdx.blob.core.windows.net/resources/Face/Images/Family3-Man1.jpg
Finally, detect the single source face that you'll use for matching, and save its ID. Keep this ID separate from the others.
https://csdx.blob.core.windows.net/resources/Face/Images/findsimilar.jpg
Find matches
Copy the following command to a text editor.
curl -v -X POST "https://westus.api.cognitive.microsoft.com/face/v1.0/findsimilars" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{body}"
Then make the following changes:
- Assign Ocp-Apim-Subscription-Key to your valid Face subscription key.
- Change the first part of the query URL to match the endpoint that corresponds to your subscription key.
Use the following JSON content for the body value:
{
"faceId": "",
"faceIds": [],
"maxNumOfCandidatesReturned": 10,
"mode": "matchPerson"
}
- Use the source face ID for "faceId".
- Paste the other face IDs as terms in the "faceIds" array.
Examine the results
You'll receive a JSON response that lists the IDs of the faces that match your query face.
[
{
"persistedFaceId" : "015839fb-fbd9-4f79-ace9-7675fc2f1dd9",
"confidence" : 0.82
},
...
]
Clean up resources
If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
Next steps
In this quickstart, you learned how to use the Face REST API to do basic facial recognition tasks. Next, explore the Face API reference documentation to learn more.