Exercise - Test the Face Detection API

Now let’s use the Face API subscription you just created to detect faces in images from a website. We’re going to use the client SDK to keep our code simple and concise. The same functionality is available using the REST API directly.
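
For reference, here is a minimal sketch of the same detection call made directly against the REST API, using Python's requests package. It assumes the two environment variables you'll set later in this exercise:

    import os
    import requests

    endpoint = os.environ["COGNITIVE_SERVICE_ENDPOINT"]
    key = os.environ["COGNITIVE_SERVICE_KEY"]

    # POST the image URL to the Face API detect operation.
    response = requests.post(
        endpoint.rstrip("/") + "/face/v1.0/detect",
        params={"returnFaceId": "true", "returnFaceAttributes": "emotion,glasses,smile"},
        headers={"Ocp-Apim-Subscription-Key": key},
        json={"url": "https://docs.microsoft.com/en-us/learn/data-ai-cert/identify-faces-with-computer-vision/media/clo19_ubisoft_azure_068.png"},
    )
    print(response.json())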

Environment setup

This exercise uses Visual Studio Code as the editor. The setup differs depending on the programming language you choose, so follow the steps for your language to configure your local computer.

Python

  1. If you will be coding in Python, ensure you have a Python environment installed locally.

  2. After you have Python installed, you will need to install the Python extension for VS Code.

C#

  1. If you will be using C# as your code language, start by installing the latest .NET Core package for your platform. You can choose Windows, Linux, or macOS from the drop-down on the download page.

  2. After you have .NET Core installed, you will need to add the C# Extension to VS Code. Select the Extensions option in the left menu pane, or press Ctrl+Shift+X, and enter C# in the search dialog box.

With your environment set up, you are now ready to begin the coding exercise.

Set environment variables

  1. Set two environment variables, replacing <subscription_key> and <endpoint> with your own values. Your code will reference these values, so make sure they are set within the same session where you run the code.

  2. Select your environment from the following options.

     bash:

        export COGNITIVE_SERVICE_KEY=<subscription_key>
        export COGNITIVE_SERVICE_ENDPOINT=<endpoint>

     Windows command prompt:

        set COGNITIVE_SERVICE_KEY=<subscription_key>
        set COGNITIVE_SERVICE_ENDPOINT=<endpoint>

     PowerShell:

        $env:COGNITIVE_SERVICE_KEY = '<subscription_key>'
        $env:COGNITIVE_SERVICE_ENDPOINT = '<endpoint>'
    
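If you want to confirm that both variables are visible before you run the exercise code, a quick check like this works (a minimal sketch in Python; the same idea applies in C# with GetEnvironmentVariable):

    import os

    # Print OK or MISSING for each variable the exercise expects.
    for name in ("COGNITIVE_SERVICE_KEY", "COGNITIVE_SERVICE_ENDPOINT"):
        print(name, "OK" if os.environ.get(name) else "MISSING")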

Detect faces using FaceClient

Python

  1. Create a new folder called Detect_Faces_Python to hold your application.

  2. Open the folder in Visual Studio Code.

  3. Create a new Python file called facedetect.py.

  4. Install the Cognitive Services FaceClient library by opening a terminal and running the following command.

    pip install azure-cognitiveservices-vision-face
    
  5. In your facedetect.py file, import the required libraries.

    import os
    from msrest.authentication import CognitiveServicesCredentials
    from azure.cognitiveservices.vision.face import FaceClient
    
  6. Create a method to get credentials and set up the client. It reads the Subscription Key and Endpoint from the environment variables.

    def get_face_client():
        """Create an authenticated FaceClient."""
        SUBSCRIPTION_KEY = os.environ["COGNITIVE_SERVICE_KEY"]
        ENDPOINT = os.environ["COGNITIVE_SERVICE_ENDPOINT"]
        credential = CognitiveServicesCredentials(SUBSCRIPTION_KEY)
        return FaceClient(ENDPOINT, credential)
    
  7. Call the face detection operation, and print the results as JSON. Set the url value to an image of your choice, or use the value in this example.

    face_client = get_face_client()
    
    url = "https://docs.microsoft.com/en-us/learn/data-ai-cert/identify-faces-with-computer-vision/media/clo19_ubisoft_azure_068.png"
    
    # Attributes to analyze for each detected face.
    attributes = ["emotion", "glasses", "smile"]
    include_id = True
    include_landmarks = False
    
    # raw=True returns the raw HTTP response so the JSON payload is available.
    detected_faces = face_client.face.detect_with_url(url, include_id, include_landmarks, attributes, raw=True)
    
    # The response body is the JSON array of detected faces.
    faces = detected_faces.response.json()
    if not faces:
        raise Exception('No face detected in image')
    
    print(faces)
    
  8. Select Run Python File in Terminal (the green arrow), or right-click in the editor and select Run Python File in Terminal.

  9. View the output. If your image has one face, you will see one entry in the JSON array with faceId and faceRectangle elements. It also includes the face attributes we requested through the attributes list. If you were to change include_landmarks from False to True, the output would also include face landmarks. A short sketch following the sample output shows one way to pull individual values out of these results.

    [
      {
        "faceId": "36c3e7f8-2194-4eed-88cb-15dc8924c8fe",
        "faceRectangle": {
          "width": 161,
          "height": 161,
          "left": 920,
          "top": 303
        },
        "faceLandmarks": null,
        "faceAttributes": {
          "smile": 0.978,
          "glasses": "readingGlasses",
          "emotion": {
            "happiness": 0.978,
            "neutral": 0.019,
            "sadness": 0.002
          }
        }
      }
    ]
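
As an optional follow-on (a minimal sketch, reusing the faces list from step 7), you could append a few lines to facedetect.py to summarize each face instead of printing the whole payload:

    # Continuing from the script above: summarize each detected face.
    for face in faces:
        rect = face["faceRectangle"]
        print(f"Face {face['faceId']}: {rect['width']}x{rect['height']} "
              f"at ({rect['left']}, {rect['top']}), "
              f"smile score {face['faceAttributes']['smile']}")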
    
C#

  1. Open Visual Studio Code.

  2. Create a new folder called Detect_Faces_Csharp to hold your application.

  3. Right-click the folder name in Visual Studio Code, and select Open in integrated terminal.

  4. You will use a .NET Core console application for this exercise, so enter the following command, and then press Enter.

    dotnet new console
    
  5. Install the Cognitive Services FaceClient library by running this command in the terminal window.

    dotnet add package Microsoft.Azure.CognitiveServices.Vision.Face --version 2.5.0-preview.1
    
  6. Select Program.cs in the file list if it isn't already open in the editor window.

  7. Import the required libraries.

    using System;
    using static System.Environment;
    using System.Threading.Tasks;
    using Newtonsoft.Json;
    
    using Microsoft.Azure.CognitiveServices.Vision.Face;
    using Microsoft.Azure.CognitiveServices.Vision.Face.Models;
    
  8. Create a method to get credentials and set up the client. It reads the Subscription Key and Endpoint from the environment variables.

    // Create an authenticated FaceClient from environment variables.
    public static FaceClient GetFaceClient()
    {
        var serviceEndpoint = GetEnvironmentVariable("COGNITIVE_SERVICE_ENDPOINT");
        var subscriptionKey = GetEnvironmentVariable("COGNITIVE_SERVICE_KEY");
        var credential = new ApiKeyServiceClientCredentials(subscriptionKey);
        return new FaceClient(credential) { Endpoint = serviceEndpoint };
    }
    
  9. Call the face detection operation, and print the results as JSON. Set the url value to an image of your choice, or use the value in this example.

    static async Task Main(string[] args)
    {
        var faceClient = GetFaceClient();
    
        var url = "https://docs.microsoft.com/en-us/learn/data-ai-cert/identify-faces-with-computer-vision/media/clo19_ubisoft_azure_068.png";
    
        // Attributes to analyze for each detected face.
        var attributes = new FaceAttributeType[] {
            FaceAttributeType.Emotion,
            FaceAttributeType.Glasses,
            FaceAttributeType.Smile
        };
        bool includeId = true;
        bool includeLandmarks = false;
    
        var result = await faceClient.Face.DetectWithUrlAsync(url, includeId, includeLandmarks, attributes);
        Console.WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
    }
    
  10. In the terminal window, run the application with this command.

    dotnet run
    
  11. View the output. If your image has one face, you will see one entry in the JSON array with faceId and faceRectangle elements. It also includes the face attributes we requested through the attributes array. If you were to change includeLandmarks from false to true, the output would also include face landmarks. A short annotation sketch follows the sample output.

    [
      {
        "faceId": "36c3e7f8-2194-4eed-88cb-15dc8924c8fe",
        "faceRectangle": {
          "width": 161,
          "height": 161,
          "left": 920,
          "top": 303
        },
        "faceLandmarks": null,
        "faceAttributes": {
          "smile": 0.978,
          "glasses": "readingGlasses",
          "emotion": {
            "happiness": 0.978,
            "neutral": 0.019,
            "sadness": 0.002
          }
        }
      }
    ]
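
Whichever language you used, the faceRectangle coordinates can be used to annotate the source image. Here is a hedged sketch in Python using the Pillow and requests packages (assumptions; install them with pip install pillow requests), hard-coding the rectangle from the sample output above:

    import io

    import requests
    from PIL import Image, ImageDraw

    # Rectangle values taken from the sample output above.
    face_rect = {"width": 161, "height": 161, "left": 920, "top": 303}
    url = ("https://docs.microsoft.com/en-us/learn/data-ai-cert/"
           "identify-faces-with-computer-vision/media/clo19_ubisoft_azure_068.png")

    # Download the image and draw the bounding box on it.
    image = Image.open(io.BytesIO(requests.get(url).content)).convert("RGB")
    draw = ImageDraw.Draw(image)
    left, top = face_rect["left"], face_rect["top"]
    draw.rectangle(
        [left, top, left + face_rect["width"], top + face_rect["height"]],
        outline="red", width=3)
    image.save("face_detected.png")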