Tutorial: Create a Windows Presentation Foundation (WPF) app to display face data in an image

In this tutorial, you'll learn how to use the Azure Face service, through the .NET client SDK, to detect faces in an image and then present that data in the UI. You'll create a WPF application that detects faces, draws a frame around each face, and displays a description of the face in the status bar.

This tutorial shows you how to:

  • Create a WPF application
  • Install the Face client library
  • Use the client library to detect faces in an image
  • Draw a frame around each detected face
  • Display a description of the highlighted face on the status bar

Screenshot showing detected faces framed with rectangles

The complete sample code is available in the Cognitive Face CSharp sample repository on GitHub.

If you don't have an Azure subscription, create a free account before you begin.

Prerequisites

  • Azure subscription - Create one for free
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, click Go to resource.
    • You will need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below later in the quickstart.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
  • Create environment variables for the key and service endpoint string, named FACE_SUBSCRIPTION_KEY and FACE_ENDPOINT, respectively.
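As a sketch of that last step, the environment variables can be set from a terminal. The key and endpoint values below are placeholders; substitute the values from your own Face resource:

```shell
# Bash (Linux/macOS): add these lines to your shell profile to persist them.
# Placeholder values - replace with your resource's key and endpoint.
export FACE_SUBSCRIPTION_KEY="<your-face-key>"
export FACE_ENDPOINT="<your-face-endpoint>"

# Windows: setx persists the variables for future sessions.
# Open a new console window afterward so the new values are picked up.
# setx FACE_SUBSCRIPTION_KEY "<your-face-key>"
# setx FACE_ENDPOINT "<your-face-endpoint>"
```

Remember to restart any running console windows or IDEs so they pick up the new values.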

Create the Visual Studio project

Follow these steps to create a new WPF application project.

  1. In Visual Studio, open the New Project dialog. Expand Installed, then Visual C#, then select WPF App (.NET Framework).
  2. Name the application FaceTutorial, then click OK.
  3. Get the required NuGet package. Right-click on your project in the Solution Explorer and select Manage NuGet Packages; then, find and install the Microsoft.Azure.CognitiveServices.Vision.Face package.
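If you prefer the command line, the same client library package can be added with the .NET CLI instead (run from the project directory; the package name matches the client namespaces used later in this tutorial):

```shell
# Adds the Face client library package to the project in the current directory.
dotnet add package Microsoft.Azure.CognitiveServices.Vision.Face
```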

Add the initial code

In this section, you will add the basic framework of the app without its face-specific features.

Create the UI

Open MainWindow.xaml and replace the contents with the following code—this code creates the UI window. The FacePhoto_MouseMove and BrowseButton_Click methods are event handlers that you will define later on.

<Window x:Class="FaceTutorial.MainWindow"
         xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
         xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
         Title="MainWindow" Height="700" Width="960">
    <Grid x:Name="BackPanel">
        <Image x:Name="FacePhoto" Stretch="Uniform" Margin="0,0,0,50" MouseMove="FacePhoto_MouseMove" />
        <DockPanel DockPanel.Dock="Bottom">
            <Button x:Name="BrowseButton" Width="72" Height="20" VerticalAlignment="Bottom" HorizontalAlignment="Left"
                     Content="Browse..." Click="BrowseButton_Click" />
            <StatusBar VerticalAlignment="Bottom">
                <StatusBarItem>
                    <TextBlock Name="faceDescriptionStatusBar" />
                </StatusBarItem>
            </StatusBar>
        </DockPanel>
    </Grid>
</Window>

Create the main class

Open MainWindow.xaml.cs and add the client library namespaces, along with other necessary namespaces.

using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;

Next, insert the following code in the MainWindow class. This code creates a FaceClient instance using the subscription key and endpoint.

// Add your Face subscription key to your environment variables.
private static string subscriptionKey = Environment.GetEnvironmentVariable("FACE_SUBSCRIPTION_KEY");
// Add your Face endpoint to your environment variables.
private static string faceEndpoint = Environment.GetEnvironmentVariable("FACE_ENDPOINT");

private readonly IFaceClient faceClient = new FaceClient(
    new ApiKeyServiceClientCredentials(subscriptionKey),
    new System.Net.Http.DelegatingHandler[] { });

// The list of detected faces.
private IList<DetectedFace> faceList;
// The list of descriptions for the detected faces.
private string[] faceDescriptions;
// The resize factor for the displayed image.
private double resizeFactor;

private const string defaultStatusBarText =
    "Place the mouse pointer over a face to see the face description.";

Next add the MainWindow constructor. It checks your endpoint URL string and then associates it with the client object.

public MainWindow()
{
    InitializeComponent();

    if (Uri.IsWellFormedUriString(faceEndpoint, UriKind.Absolute))
    {
        faceClient.Endpoint = faceEndpoint;
    }
    else
    {
        MessageBox.Show(faceEndpoint,
            "Invalid URI", MessageBoxButton.OK, MessageBoxImage.Error);
        Environment.Exit(0);
    }
}

Finally, add the BrowseButton_Click and FacePhoto_MouseMove methods to the class. These methods correspond to the event handlers declared in MainWindow.xaml. The BrowseButton_Click method creates an OpenFileDialog, which allows the user to select a .jpg image. It then displays the image in the main window. You will insert the remaining code for BrowseButton_Click and FacePhoto_MouseMove in later steps. Also note the faceList field declared earlier, a list of DetectedFace objects; this is where your app will store and access the actual face data.

// Displays the image and calls UploadAndDetectFaces.
private async void BrowseButton_Click(object sender, RoutedEventArgs e)
{
    // Get the image file to scan from the user.
    var openDlg = new Microsoft.Win32.OpenFileDialog();

    openDlg.Filter = "JPEG Image(*.jpg)|*.jpg";
    bool? result = openDlg.ShowDialog(this);

    // Return if canceled.
    if (!(bool)result)
    {
        return;
    }

    // Display the image file.
    string filePath = openDlg.FileName;

    Uri fileUri = new Uri(filePath);
    BitmapImage bitmapSource = new BitmapImage();

    bitmapSource.BeginInit();
    bitmapSource.CacheOption = BitmapCacheOption.None;
    bitmapSource.UriSource = fileUri;
    bitmapSource.EndInit();

    FacePhoto.Source = bitmapSource;
}

// Displays the face description when the mouse is over a face rectangle.
private void FacePhoto_MouseMove(object sender, MouseEventArgs e)
{
}

Try the app

Press Start on the menu to test your app. When the app window opens, click Browse in the lower left corner. A File Open dialog should appear. Select an image from your filesystem and verify that it displays in the window. Then, close the app and advance to the next step.

Screenshot showing unmodified image of faces

Upload image and detect faces

Your app will detect faces by calling the FaceClient.Face.DetectWithStreamAsync method, which wraps the Detect REST API for uploading a local image.

Insert the following method in the MainWindow class, below the FacePhoto_MouseMove method. This method defines a list of face attributes to retrieve and reads the submitted image file into a Stream. Then it passes both of these objects to the DetectWithStreamAsync method call.

// Uploads the image file and calls DetectWithStreamAsync.
private async Task<IList<DetectedFace>> UploadAndDetectFaces(string imageFilePath)
{
    // The list of Face attributes to return.
    IList<FaceAttributeType?> faceAttributes =
        new FaceAttributeType?[]
        {
            FaceAttributeType.Gender, FaceAttributeType.Age,
            FaceAttributeType.Smile, FaceAttributeType.Emotion,
            FaceAttributeType.Glasses, FaceAttributeType.Hair
        };

    // Call the Face API.
    try
    {
        using (Stream imageFileStream = File.OpenRead(imageFilePath))
        {
            // The second argument specifies to return the faceId, while
            // the third argument specifies not to return face landmarks.
            IList<DetectedFace> faceList =
                await faceClient.Face.DetectWithStreamAsync(
                    imageFileStream, true, false, faceAttributes);
            return faceList;
        }
    }
    // Catch and display Face API errors.
    catch (APIErrorException f)
    {
        MessageBox.Show(f.Message);
        return new List<DetectedFace>();
    }
    // Catch and display all other errors.
    catch (Exception e)
    {
        MessageBox.Show(e.Message, "Error");
        return new List<DetectedFace>();
    }
}

Draw rectangles around faces

Next, you will add the code to draw a rectangle around each detected face in the image. In the MainWindow class, insert the following code at the end of the BrowseButton_Click method, after the FacePhoto.Source = bitmapSource line. This code populates a list of detected faces from the call to UploadAndDetectFaces. Then it draws a rectangle around each face and displays the modified image in the main window.

// Detect any faces in the image.
Title = "Detecting...";
faceList = await UploadAndDetectFaces(filePath);
Title = String.Format(
    "Detection Finished. {0} face(s) detected", faceList.Count);

if (faceList.Count > 0)
{
    // Prepare to draw rectangles around the faces.
    DrawingVisual visual = new DrawingVisual();
    DrawingContext drawingContext = visual.RenderOpen();
    drawingContext.DrawImage(bitmapSource,
        new Rect(0, 0, bitmapSource.Width, bitmapSource.Height));
    double dpi = bitmapSource.DpiX;
    // Some images don't contain dpi info.
    resizeFactor = (dpi == 0) ? 1 : 96 / dpi;
    faceDescriptions = new String[faceList.Count];

    for (int i = 0; i < faceList.Count; ++i)
    {
        DetectedFace face = faceList[i];

        // Draw a rectangle on the face.
        drawingContext.DrawRectangle(
            Brushes.Transparent,
            new Pen(Brushes.Red, 2),
            new Rect(
                face.FaceRectangle.Left * resizeFactor,
                face.FaceRectangle.Top * resizeFactor,
                face.FaceRectangle.Width * resizeFactor,
                face.FaceRectangle.Height * resizeFactor
                )
            );

        // Store the face description.
        faceDescriptions[i] = FaceDescription(face);
    }

    drawingContext.Close();

    // Display the image with the rectangle around the face.
    RenderTargetBitmap faceWithRectBitmap = new RenderTargetBitmap(
        (int)(bitmapSource.PixelWidth * resizeFactor),
        (int)(bitmapSource.PixelHeight * resizeFactor),
        96,
        96,
        PixelFormats.Pbgra32);

    faceWithRectBitmap.Render(visual);
    FacePhoto.Source = faceWithRectBitmap;

    // Set the status bar text.
    faceDescriptionStatusBar.Text = defaultStatusBarText;
}

Describe the faces

Add the following method to the MainWindow class, below the UploadAndDetectFaces method. This method converts the retrieved face attributes into a string describing the face.

// Creates a string out of the attributes describing the face.
private string FaceDescription(DetectedFace face)
{
    StringBuilder sb = new StringBuilder();

    sb.Append("Face: ");

    // Add the gender, age, and smile.
    sb.Append(face.FaceAttributes.Gender);
    sb.Append(", ");
    sb.Append(face.FaceAttributes.Age);
    sb.Append(", ");
    sb.Append(String.Format("smile {0:F1}%, ", face.FaceAttributes.Smile * 100));

    // Add the emotions. Display all emotions over 10%.
    sb.Append("Emotion: ");
    Emotion emotionScores = face.FaceAttributes.Emotion;
    if (emotionScores.Anger >= 0.1f) sb.Append(
        String.Format("anger {0:F1}%, ", emotionScores.Anger * 100));
    if (emotionScores.Contempt >= 0.1f) sb.Append(
        String.Format("contempt {0:F1}%, ", emotionScores.Contempt * 100));
    if (emotionScores.Disgust >= 0.1f) sb.Append(
        String.Format("disgust {0:F1}%, ", emotionScores.Disgust * 100));
    if (emotionScores.Fear >= 0.1f) sb.Append(
        String.Format("fear {0:F1}%, ", emotionScores.Fear * 100));
    if (emotionScores.Happiness >= 0.1f) sb.Append(
        String.Format("happiness {0:F1}%, ", emotionScores.Happiness * 100));
    if (emotionScores.Neutral >= 0.1f) sb.Append(
        String.Format("neutral {0:F1}%, ", emotionScores.Neutral * 100));
    if (emotionScores.Sadness >= 0.1f) sb.Append(
        String.Format("sadness {0:F1}%, ", emotionScores.Sadness * 100));
    if (emotionScores.Surprise >= 0.1f) sb.Append(
        String.Format("surprise {0:F1}%, ", emotionScores.Surprise * 100));

    // Add glasses.
    sb.Append(face.FaceAttributes.Glasses);
    sb.Append(", ");

    // Add hair.
    sb.Append("Hair: ");

    // Display baldness confidence if over 1%.
    if (face.FaceAttributes.Hair.Bald >= 0.01f)
        sb.Append(String.Format("bald {0:F1}% ", face.FaceAttributes.Hair.Bald * 100));

    // Display all hair color attributes over 10%.
    IList<HairColor> hairColors = face.FaceAttributes.Hair.HairColor;
    foreach (HairColor hairColor in hairColors)
    {
        if (hairColor.Confidence >= 0.1f)
        {
            sb.Append(hairColor.Color.ToString());
            sb.Append(String.Format(" {0:F1}% ", hairColor.Confidence * 100));
        }
    }

    // Return the built string.
    return sb.ToString();
}

Display the face description

Add the following code to the FacePhoto_MouseMove method. This event handler displays the face description string in faceDescriptionStatusBar when the cursor hovers over a detected face rectangle.

// If the REST call has not completed, return.
if (faceList == null)
    return;

// Find the mouse position relative to the image.
Point mouseXY = e.GetPosition(FacePhoto);

ImageSource imageSource = FacePhoto.Source;
BitmapSource bitmapSource = (BitmapSource)imageSource;

// Scale adjustment between the actual size and displayed size.
var scale = FacePhoto.ActualWidth / (bitmapSource.PixelWidth / resizeFactor);

// Check if this mouse position is over a face rectangle.
bool mouseOverFace = false;

for (int i = 0; i < faceList.Count; ++i)
{
    FaceRectangle fr = faceList[i].FaceRectangle;
    double left = fr.Left * scale;
    double top = fr.Top * scale;
    double width = fr.Width * scale;
    double height = fr.Height * scale;

    // Display the face description if the mouse is over this face rectangle.
    if (mouseXY.X >= left && mouseXY.X <= left + width &&
        mouseXY.Y >= top && mouseXY.Y <= top + height)
    {
        faceDescriptionStatusBar.Text = faceDescriptions[i];
        mouseOverFace = true;
        break;
    }
}

// String to display when the mouse is not over a face rectangle.
if (!mouseOverFace) faceDescriptionStatusBar.Text = defaultStatusBarText;

Run the app

Run the application and browse for an image containing a face. Wait a few seconds to allow the Face service to respond. You should see a red rectangle on each of the faces in the image. If you move the mouse over a face rectangle, the description of that face should appear in the status bar.

Screenshot showing detected faces framed with rectangles

Next steps

In this tutorial, you learned the basic process for using the Face service .NET SDK and created an application to detect and frame faces in an image. Next, learn more about the details of face detection.