Getting Started with Face API in C# Tutorial

In this tutorial, you will create a WPF Windows application that uses the Face API. The application detects faces in an image, draws a frame around each face, and displays a description of each face on the status bar.

GettingStartCSharpScreenshot

Preparation

To follow this tutorial, you need the following prerequisites:

  • Make sure Visual Studio 2015 or later is installed.

Step 1: Subscribe to Face API and get your subscription key

Before using the Face API, you must sign up in the Microsoft Cognitive Services portal to subscribe to Face API. See Subscriptions. Either the primary or the secondary subscription key can be used in this tutorial.

Step 2: Create the Visual Studio solution

In this step, you create a Windows WPF application project for a basic application that selects and displays an image. Follow these instructions:

  1. Open Visual Studio.
  2. From the File menu, click New, then Project.
  3. In the New Project dialog box, select WPF for the application.

    In Visual Studio 2015, expand Installed > Templates > Visual C# > Windows > Classic Desktop, and select WPF Application.

    In Visual Studio 2017, expand Installed > Templates > Visual C# > Windows Classic Desktop, and select WPF App (.NET Framework).

  4. Name the application FaceTutorial, then click OK.

    New Project dialog box with WPF Application selected

  5. Locate Solution Explorer, right-click your project (FaceTutorial in this case), and then click Manage NuGet Packages.

  6. In the NuGet Package Manager window, select nuget.org as your Package source.
  7. Search for Newtonsoft.Json, then click Install. (In Visual Studio 2017, first click the Browse tab, then search.)

    GettingStartCSharpPackageManager

Step 3: Configure the Face API client library

Face API is a cloud API that you can invoke through HTTPS REST requests. For ease of use in .NET applications, a .NET client library encapsulates the Face API REST requests. In this example, we use the client library to simplify our work.
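Under the hood, the client library's DetectAsync method issues an HTTPS POST to the Face - Detect endpoint. The following is a minimal sketch of that raw REST call, in case you want to see what the library encapsulates. The FaceRestSketch class and its method names are illustrative only; they are not part of the tutorial code or the client library:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Hypothetical helper that shows the raw HTTPS request which the client
// library's DetectAsync method wraps for you.
public static class FaceRestSketch
{
    // Builds the Face - Detect request URI from the endpoint and the
    // attribute names to return.
    public static string BuildDetectRequestUri(string endpoint, string[] attributes)
    {
        return endpoint + "/detect?returnFaceId=true&returnFaceLandmarks=false"
            + "&returnFaceAttributes=" + string.Join(",", attributes);
    }

    // Posts the raw image bytes to the Face - Detect endpoint and returns
    // the JSON response body. Requires a valid subscription key.
    public static async Task<string> DetectRawAsync(
        string endpoint, string subscriptionKey, byte[] imageBytes, string[] attributes)
    {
        using (var client = new HttpClient())
        using (var content = new ByteArrayContent(imageBytes))
        {
            // The subscription key travels in this header on every request.
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
            // Binary image uploads use the octet-stream content type.
            content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

            HttpResponseMessage response =
                await client.PostAsync(BuildDetectRequestUri(endpoint, attributes), content);
            return await response.Content.ReadAsStringAsync();
        }
    }
}
```

Using the client library instead of this raw call also gives you typed Face objects and FaceAPIException error handling for free.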

Follow these instructions to configure the client library:

  1. In Solution Explorer, right-click your project (FaceTutorial in this case), and then click Manage NuGet Packages.
  2. In the NuGet Package Manager window, select nuget.org as your Package source.
  3. Search for Microsoft.ProjectOxford.Face, then click Install. (In Visual Studio 2017, first click the Browse tab, then search.)

    GettingStartCSharpPackageManagerSDK

  4. In Solution Explorer, check your project references. The references Microsoft.ProjectOxford.Common, Microsoft.ProjectOxford.Face, and Newtonsoft.Json are added automatically when the installation succeeds.

    GetStartedCSharp-CheckInstallation.png

Step 4: Copy and paste the initial code

  1. Open MainWindow.xaml, and replace the existing code with the following code to create the window UI:

    <Window x:Class="FaceTutorial.MainWindow"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="MainWindow" Height="700" Width="960">
        <Grid x:Name="BackPanel">
            <Image x:Name="FacePhoto" Stretch="Uniform" Margin="0,0,0,50" MouseMove="FacePhoto_MouseMove" />
            <DockPanel DockPanel.Dock="Bottom">
                <Button x:Name="BrowseButton" Width="72" Height="20" VerticalAlignment="Bottom" HorizontalAlignment="Left"
                        Content="Browse..."
                        Click="BrowseButton_Click" />
                <StatusBar VerticalAlignment="Bottom">
                    <StatusBarItem>
                        <TextBlock Name="faceDescriptionStatusBar" />
                    </StatusBarItem>
                </StatusBar>
            </DockPanel>
        </Grid>
    </Window>
    
  2. Open MainWindow.xaml.cs, and replace the existing code with the following code:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Text;
    using System.Threading.Tasks;
    using System.Windows;
    using System.Windows.Input;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;
    using Microsoft.ProjectOxford.Common.Contract;
    using Microsoft.ProjectOxford.Face;
    using Microsoft.ProjectOxford.Face.Contract;
    
    namespace FaceTutorial
    {
        public partial class MainWindow : Window
        {
            // Replace the first parameter with your valid subscription key.
            //
            // Replace or verify the region in the second parameter.
            //
            // You must use the same region in your REST API call as you used to obtain your subscription keys.
            // For example, if you obtained your subscription keys from the westus region, replace
            // "westcentralus" in the URI below with "westus".
            //
            // NOTE: Free trial subscription keys are generated in the westcentralus region, so if you are using
            // a free trial subscription key, you should not need to change this region.
            private readonly IFaceServiceClient faceServiceClient =
                new FaceServiceClient("<Subscription Key>", "https://westcentralus.api.cognitive.microsoft.com/face/v1.0");
    
            Face[] faces;                   // The list of detected faces.
            String[] faceDescriptions;      // The list of descriptions for the detected faces.
            double resizeFactor;            // The resize factor for the displayed image.
    
            public MainWindow()
            {
                InitializeComponent();
            }
    
            // Displays the image and calls Detect Faces.
    
            private void BrowseButton_Click(object sender, RoutedEventArgs e)
            {
                // Get the image file to scan from the user.
                var openDlg = new Microsoft.Win32.OpenFileDialog();
    
                openDlg.Filter = "JPEG Image(*.jpg)|*.jpg";
                bool? result = openDlg.ShowDialog(this);
    
                // Return if canceled.
                if (result != true)
                {
                    return;
                }
    
                // Display the image file.
                string filePath = openDlg.FileName;
    
                Uri fileUri = new Uri(filePath);
                BitmapImage bitmapSource = new BitmapImage();
    
                bitmapSource.BeginInit();
                bitmapSource.CacheOption = BitmapCacheOption.None;
                bitmapSource.UriSource = fileUri;
                bitmapSource.EndInit();
    
                FacePhoto.Source = bitmapSource;
            }
    
            // Displays the face description when the mouse is over a face rectangle.
    
            private void FacePhoto_MouseMove(object sender, MouseEventArgs e)
            {
            }
        }
    }
    
  3. Insert your subscription key and verify the region.

    Find these lines in the MainWindow.xaml.cs file (lines 28 and 29):

    private readonly IFaceServiceClient faceServiceClient =
            new FaceServiceClient("<Subscription Key>", "https://westcentralus.api.cognitive.microsoft.com/face/v1.0");
    

    Replace <Subscription Key> in the first parameter with your Face API subscription key from step 1.

    Also, check the second parameter to be sure it uses the region where you obtained your subscription keys. For example, if you obtained your keys from the westus region, replace "westcentralus" in the URI with "westus".

    If you received your subscription keys through the free trial, the region for your keys is westcentralus, so no change is required.
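The region-to-endpoint pattern described above can be captured in a small helper. The FaceEndpoint class below is a hypothetical convenience, not part of the tutorial code; it simply reproduces the URI pattern shown in the comments:

```csharp
using System;

// Hypothetical helper that derives the Face API base endpoint URI from an
// Azure region name, following the pattern used in this tutorial.
public static class FaceEndpoint
{
    public static string ForRegion(string region)
    {
        if (string.IsNullOrWhiteSpace(region))
            throw new ArgumentException("A region name is required.", nameof(region));
        return $"https://{region}.api.cognitive.microsoft.com/face/v1.0";
    }
}
```

For example, FaceEndpoint.ForRegion("westus") produces the westus URI you would pass to the FaceServiceClient constructor.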

Now your app can browse for a photo and display it in the window.

GettingStartCSharpUI

Step 5: Upload images to detect faces

The most straightforward way to detect faces is to call the Face - Detect API by uploading the image file directly. When you use the client library, you can do this with the asynchronous DetectAsync method of FaceServiceClient. Each returned face includes a rectangle that indicates its location, along with a series of optional face attributes.

Insert the following code in the MainWindow class:

// Uploads the image file and calls Detect Faces.

private async Task<Face[]> UploadAndDetectFaces(string imageFilePath)
{
    // The list of Face attributes to return.
    IEnumerable<FaceAttributeType> faceAttributes =
        new FaceAttributeType[] { FaceAttributeType.Gender, FaceAttributeType.Age, FaceAttributeType.Smile, FaceAttributeType.Emotion, FaceAttributeType.Glasses, FaceAttributeType.Hair };

    // Call the Face API.
    try
    {
        using (Stream imageFileStream = File.OpenRead(imageFilePath))
        {
            Face[] faces = await faceServiceClient.DetectAsync(imageFileStream, returnFaceId: true, returnFaceLandmarks:false, returnFaceAttributes: faceAttributes);
            return faces;
        }
    }
    // Catch and display Face API errors.
    catch (FaceAPIException f)
    {
        MessageBox.Show(f.ErrorMessage, f.ErrorCode);
        return new Face[0];
    }
    // Catch and display all other errors.
    catch (Exception e)
    {
        MessageBox.Show(e.Message, "Error");
        return new Face[0];
    }
}

Step 6: Mark faces in the image

In this step, we combine all the previous steps and mark the detected faces in the image.

In MainWindow.xaml.cs, add the async modifier to the BrowseButton_Click method:

private async void BrowseButton_Click(object sender, RoutedEventArgs e)

Insert the following code at the end of the BrowseButton_Click event handler:

// Detect any faces in the image.
Title = "Detecting...";
faces = await UploadAndDetectFaces(filePath);
Title = String.Format("Detection Finished. {0} face(s) detected", faces.Length);

if (faces.Length > 0)
{
    // Prepare to draw rectangles around the faces.
    DrawingVisual visual = new DrawingVisual();
    DrawingContext drawingContext = visual.RenderOpen();
    drawingContext.DrawImage(bitmapSource,
        new Rect(0, 0, bitmapSource.Width, bitmapSource.Height));
    double dpi = bitmapSource.DpiX;
    resizeFactor = 96 / dpi;
    faceDescriptions = new String[faces.Length];

    for (int i = 0; i < faces.Length; ++i)
    {
        Face face = faces[i];

        // Draw a rectangle on the face.
        drawingContext.DrawRectangle(
            Brushes.Transparent,
            new Pen(Brushes.Red, 2),
            new Rect(
                face.FaceRectangle.Left * resizeFactor,
                face.FaceRectangle.Top * resizeFactor,
                face.FaceRectangle.Width * resizeFactor,
                face.FaceRectangle.Height * resizeFactor
                )
        );

        // Store the face description.
        faceDescriptions[i] = FaceDescription(face);
    }

    drawingContext.Close();

    // Display the image with the rectangle around the face.
    RenderTargetBitmap faceWithRectBitmap = new RenderTargetBitmap(
        (int)(bitmapSource.PixelWidth * resizeFactor),
        (int)(bitmapSource.PixelHeight * resizeFactor),
        96,
        96,
        PixelFormats.Pbgra32);

    faceWithRectBitmap.Render(visual);
    FacePhoto.Source = faceWithRectBitmap;

    // Set the status bar text.
    faceDescriptionStatusBar.Text = "Place the mouse pointer over a face to see the face description.";
}
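In the code above, resizeFactor = 96 / dpi converts the rectangle coordinates that Face - Detect returns (in image pixels) into WPF's device-independent units, which are fixed at 96 per inch. The sketch below isolates that math; FaceRectMath is a hypothetical helper name, not part of the tutorial code:

```csharp
using System;

// Hypothetical helper showing the coordinate conversion used when drawing
// face rectangles: image pixels are multiplied by 96 / dpi to get WPF
// device-independent units.
public static class FaceRectMath
{
    // The factor that converts image pixels to device-independent units.
    public static double ResizeFactor(double imageDpi) => 96.0 / imageDpi;

    // Converts a single pixel coordinate to device-independent units.
    public static double ToDeviceIndependent(double pixels, double imageDpi)
        => pixels * ResizeFactor(imageDpi);
}
```

For a 96-DPI image the factor is 1.0 and coordinates pass through unchanged; for a 72-DPI image each pixel coordinate is scaled up by 96/72.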

Step 7: Describe faces in the image

In this step, we examine the Face properties and generate a string that describes the face. This string is displayed when the mouse pointer hovers over a face rectangle.

Add this method to the MainWindow class to convert the face details into a string:

// Returns a string that describes the given face.

private string FaceDescription(Face face)
{
    StringBuilder sb = new StringBuilder();

    sb.Append("Face: ");

    // Add the gender, age, and smile.
    sb.Append(face.FaceAttributes.Gender);
    sb.Append(", ");
    sb.Append(face.FaceAttributes.Age);
    sb.Append(", ");
    sb.Append(String.Format("smile {0:F1}%, ", face.FaceAttributes.Smile * 100));

    // Add the emotions. Display all emotions over 10%.
    sb.Append("Emotion: ");
    EmotionScores emotionScores = face.FaceAttributes.Emotion;
    if (emotionScores.Anger     >= 0.1f) sb.Append(String.Format("anger {0:F1}%, ",     emotionScores.Anger * 100));
    if (emotionScores.Contempt  >= 0.1f) sb.Append(String.Format("contempt {0:F1}%, ",  emotionScores.Contempt * 100));
    if (emotionScores.Disgust   >= 0.1f) sb.Append(String.Format("disgust {0:F1}%, ",   emotionScores.Disgust * 100));
    if (emotionScores.Fear      >= 0.1f) sb.Append(String.Format("fear {0:F1}%, ",      emotionScores.Fear * 100));
    if (emotionScores.Happiness >= 0.1f) sb.Append(String.Format("happiness {0:F1}%, ", emotionScores.Happiness * 100));
    if (emotionScores.Neutral   >= 0.1f) sb.Append(String.Format("neutral {0:F1}%, ",   emotionScores.Neutral * 100));
    if (emotionScores.Sadness   >= 0.1f) sb.Append(String.Format("sadness {0:F1}%, ",   emotionScores.Sadness * 100));
    if (emotionScores.Surprise  >= 0.1f) sb.Append(String.Format("surprise {0:F1}%, ",  emotionScores.Surprise * 100));

    // Add glasses.
    sb.Append(face.FaceAttributes.Glasses);
    sb.Append(", ");

    // Add hair.
    sb.Append("Hair: ");

    // Display baldness confidence if over 1%.
    if (face.FaceAttributes.Hair.Bald >= 0.01f)
        sb.Append(String.Format("bald {0:F1}% ", face.FaceAttributes.Hair.Bald * 100));

    // Display all hair color attributes over 10%.
    HairColor[] hairColors = face.FaceAttributes.Hair.HairColor;
    foreach (HairColor hairColor in hairColors)
    {
        if (hairColor.Confidence >= 0.1f)
        {
            sb.Append(hairColor.Color.ToString());
            sb.Append(String.Format(" {0:F1}% ", hairColor.Confidence * 100));
        }
    }

    // Return the built string.
    return sb.ToString();
}

Step 8: Display the face description

Replace the FacePhoto_MouseMove method with the following code:

private void FacePhoto_MouseMove(object sender, MouseEventArgs e)
{
    // If the REST call has not completed, return from this method.
    if (faces == null)
        return;

    // Find the mouse position relative to the image.
    Point mouseXY = e.GetPosition(FacePhoto);

    ImageSource imageSource = FacePhoto.Source;
    BitmapSource bitmapSource = (BitmapSource)imageSource;

    // Scale adjustment between the actual size and displayed size.
    var scale = FacePhoto.ActualWidth / (bitmapSource.PixelWidth / resizeFactor);

    // Check if this mouse position is over a face rectangle.
    bool mouseOverFace = false;

    for (int i = 0; i < faces.Length; ++i)
    {
        FaceRectangle fr = faces[i].FaceRectangle;
        double left = fr.Left * scale;
        double top = fr.Top * scale;
        double width = fr.Width * scale;
        double height = fr.Height * scale;

        // Display the face description for this face if the mouse is over this face rectangle.
        if (mouseXY.X >= left && mouseXY.X <= left + width && mouseXY.Y >= top && mouseXY.Y <= top + height)
        {
            faceDescriptionStatusBar.Text = faceDescriptions[i];
            mouseOverFace = true;
            break;
        }
    }

    // If the mouse is not over a face rectangle.
    if (!mouseOverFace)
        faceDescriptionStatusBar.Text = "Place the mouse pointer over a face to see the face description.";
}
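The hit test above is pure math: each face rectangle (in image pixels) is scaled to the displayed image size, then checked for containment of the mouse point. FaceHitTest below is a hypothetical helper, not part of the tutorial code, that mirrors that logic:

```csharp
using System;

// Hypothetical helper isolating the hit test used in FacePhoto_MouseMove.
public static class FaceHitTest
{
    // Scale from image pixels to displayed units: the image is rendered at
    // pixelWidth * resizeFactor units, then stretched to actualWidth.
    public static double DisplayScale(double actualWidth, double pixelWidth, double resizeFactor)
        => actualWidth / (pixelWidth / resizeFactor);

    // True if the mouse point falls inside the scaled face rectangle.
    public static bool Contains(double left, double top, double width, double height,
                                double scale, double mouseX, double mouseY)
    {
        double l = left * scale, t = top * scale;
        double w = width * scale, h = height * scale;
        return mouseX >= l && mouseX <= l + w && mouseY >= t && mouseY <= t + h;
    }
}
```

Keeping the scale calculation separate from the containment check makes the per-face loop in FacePhoto_MouseMove easy to follow.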

Run the application and browse for an image that contains a face. Wait a few seconds for the cloud API to respond. Then a red rectangle appears around each face in the image. When you move the mouse over a face rectangle, the description of that face appears on the status bar:

GettingStartCSharpScreenshot

Summary

In this tutorial, you learned the basic process for using the Face API and created an application that marks faces in images. For more information about the Face API, see the How-To and API Reference.

Full source

The full source for the WPF Windows application is presented here.

MainWindow.xaml:

<Window x:Class="FaceTutorial.MainWindow"
         xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
         xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
         Title="MainWindow" Height="700" Width="960">
    <Grid x:Name="BackPanel">
        <Image x:Name="FacePhoto" Stretch="Uniform" Margin="0,0,0,50" MouseMove="FacePhoto_MouseMove" />
        <DockPanel DockPanel.Dock="Bottom">
            <Button x:Name="BrowseButton" Width="72" Height="20" VerticalAlignment="Bottom" HorizontalAlignment="Left"
                     Content="Browse..."
                     Click="BrowseButton_Click" />
            <StatusBar VerticalAlignment="Bottom">
                <StatusBarItem>
                    <TextBlock Name="faceDescriptionStatusBar" />
                </StatusBarItem>
            </StatusBar>
        </DockPanel>
    </Grid>
</Window>

MainWindow.xaml.cs:

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using Microsoft.ProjectOxford.Common.Contract;
using Microsoft.ProjectOxford.Face;
using Microsoft.ProjectOxford.Face.Contract;

namespace FaceTutorial
{
    public partial class MainWindow : Window
    {
        // Replace the first parameter with your valid subscription key.
        //
        // Replace or verify the region in the second parameter.
        //
        // You must use the same region in your REST API call as you used to obtain your subscription keys.
        // For example, if you obtained your subscription keys from the westus region, replace
        // "westcentralus" in the URI below with "westus".
        //
        // NOTE: Free trial subscription keys are generated in the westcentralus region, so if you are using
        // a free trial subscription key, you should not need to change this region.
        private readonly IFaceServiceClient faceServiceClient =
            new FaceServiceClient("<Subscription Key>", "https://westcentralus.api.cognitive.microsoft.com/face/v1.0");

        Face[] faces;                   // The list of detected faces.
        String[] faceDescriptions;      // The list of descriptions for the detected faces.
        double resizeFactor;            // The resize factor for the displayed image.

        public MainWindow()
        {
            InitializeComponent();
        }

        // Displays the image and calls Detect Faces.

        private async void BrowseButton_Click(object sender, RoutedEventArgs e)
        {
            // Get the image file to scan from the user.
            var openDlg = new Microsoft.Win32.OpenFileDialog();

            openDlg.Filter = "JPEG Image(*.jpg)|*.jpg";
            bool? result = openDlg.ShowDialog(this);

            // Return if canceled.
            if (result != true)
            {
                return;
            }

            // Display the image file.
            string filePath = openDlg.FileName;

            Uri fileUri = new Uri(filePath);
            BitmapImage bitmapSource = new BitmapImage();

            bitmapSource.BeginInit();
            bitmapSource.CacheOption = BitmapCacheOption.None;
            bitmapSource.UriSource = fileUri;
            bitmapSource.EndInit();

            FacePhoto.Source = bitmapSource;

            // Detect any faces in the image.
            Title = "Detecting...";
            faces = await UploadAndDetectFaces(filePath);
            Title = String.Format("Detection Finished. {0} face(s) detected", faces.Length);

            if (faces.Length > 0)
            {
                // Prepare to draw rectangles around the faces.
                DrawingVisual visual = new DrawingVisual();
                DrawingContext drawingContext = visual.RenderOpen();
                drawingContext.DrawImage(bitmapSource,
                    new Rect(0, 0, bitmapSource.Width, bitmapSource.Height));
                double dpi = bitmapSource.DpiX;
                resizeFactor = 96 / dpi;
                faceDescriptions = new String[faces.Length];

                for (int i = 0; i < faces.Length; ++i)
                {
                    Face face = faces[i];

                    // Draw a rectangle on the face.
                    drawingContext.DrawRectangle(
                        Brushes.Transparent,
                        new Pen(Brushes.Red, 2),
                        new Rect(
                            face.FaceRectangle.Left * resizeFactor,
                            face.FaceRectangle.Top * resizeFactor,
                            face.FaceRectangle.Width * resizeFactor,
                            face.FaceRectangle.Height * resizeFactor
                            )
                    );

                    // Store the face description.
                    faceDescriptions[i] = FaceDescription(face);
                }

                drawingContext.Close();

                // Display the image with the rectangle around the face.
                RenderTargetBitmap faceWithRectBitmap = new RenderTargetBitmap(
                    (int)(bitmapSource.PixelWidth * resizeFactor),
                    (int)(bitmapSource.PixelHeight * resizeFactor),
                    96,
                    96,
                    PixelFormats.Pbgra32);

                faceWithRectBitmap.Render(visual);
                FacePhoto.Source = faceWithRectBitmap;

                // Set the status bar text.
                faceDescriptionStatusBar.Text = "Place the mouse pointer over a face to see the face description.";
            }
        }

        // Displays the face description when the mouse is over a face rectangle.

        private void FacePhoto_MouseMove(object sender, MouseEventArgs e)
        {
            // If the REST call has not completed, return from this method.
            if (faces == null)
                return;

            // Find the mouse position relative to the image.
            Point mouseXY = e.GetPosition(FacePhoto);

            ImageSource imageSource = FacePhoto.Source;
            BitmapSource bitmapSource = (BitmapSource)imageSource;

            // Scale adjustment between the actual size and displayed size.
            var scale = FacePhoto.ActualWidth / (bitmapSource.PixelWidth / resizeFactor);

            // Check if this mouse position is over a face rectangle.
            bool mouseOverFace = false;

            for (int i = 0; i < faces.Length; ++i)
            {
                FaceRectangle fr = faces[i].FaceRectangle;
                double left = fr.Left * scale;
                double top = fr.Top * scale;
                double width = fr.Width * scale;
                double height = fr.Height * scale;

                // Display the face description for this face if the mouse is over this face rectangle.
                if (mouseXY.X >= left && mouseXY.X <= left + width && mouseXY.Y >= top && mouseXY.Y <= top + height)
                {
                    faceDescriptionStatusBar.Text = faceDescriptions[i];
                    mouseOverFace = true;
                    break;
                }
            }

            // If the mouse is not over a face rectangle.
            if (!mouseOverFace)
                faceDescriptionStatusBar.Text = "Place the mouse pointer over a face to see the face description.";
        }

        // Uploads the image file and calls Detect Faces.

        private async Task<Face[]> UploadAndDetectFaces(string imageFilePath)
        {
            // The list of Face attributes to return.
            IEnumerable<FaceAttributeType> faceAttributes =
                new FaceAttributeType[] { FaceAttributeType.Gender, FaceAttributeType.Age, FaceAttributeType.Smile, FaceAttributeType.Emotion, FaceAttributeType.Glasses, FaceAttributeType.Hair };

            // Call the Face API.
            try
            {
                using (Stream imageFileStream = File.OpenRead(imageFilePath))
                {
                    Face[] faces = await faceServiceClient.DetectAsync(imageFileStream, returnFaceId: true, returnFaceLandmarks: false, returnFaceAttributes: faceAttributes);
                    return faces;
                }
            }
            // Catch and display Face API errors.
            catch (FaceAPIException f)
            {
                MessageBox.Show(f.ErrorMessage, f.ErrorCode);
                return new Face[0];
            }
            // Catch and display all other errors.
            catch (Exception e)
            {
                MessageBox.Show(e.Message, "Error");
                return new Face[0];
            }
        }

        // Returns a string that describes the given face.

        private string FaceDescription(Face face)
        {
            StringBuilder sb = new StringBuilder();

            sb.Append("Face: ");

            // Add the gender, age, and smile.
            sb.Append(face.FaceAttributes.Gender);
            sb.Append(", ");
            sb.Append(face.FaceAttributes.Age);
            sb.Append(", ");
            sb.Append(String.Format("smile {0:F1}%, ", face.FaceAttributes.Smile * 100));

            // Add the emotions. Display all emotions over 10%.
            sb.Append("Emotion: ");
            EmotionScores emotionScores = face.FaceAttributes.Emotion;
            if (emotionScores.Anger >= 0.1f) sb.Append(String.Format("anger {0:F1}%, ", emotionScores.Anger * 100));
            if (emotionScores.Contempt >= 0.1f) sb.Append(String.Format("contempt {0:F1}%, ", emotionScores.Contempt * 100));
            if (emotionScores.Disgust >= 0.1f) sb.Append(String.Format("disgust {0:F1}%, ", emotionScores.Disgust * 100));
            if (emotionScores.Fear >= 0.1f) sb.Append(String.Format("fear {0:F1}%, ", emotionScores.Fear * 100));
            if (emotionScores.Happiness >= 0.1f) sb.Append(String.Format("happiness {0:F1}%, ", emotionScores.Happiness * 100));
            if (emotionScores.Neutral >= 0.1f) sb.Append(String.Format("neutral {0:F1}%, ", emotionScores.Neutral * 100));
            if (emotionScores.Sadness >= 0.1f) sb.Append(String.Format("sadness {0:F1}%, ", emotionScores.Sadness * 100));
            if (emotionScores.Surprise >= 0.1f) sb.Append(String.Format("surprise {0:F1}%, ", emotionScores.Surprise * 100));

            // Add glasses.
            sb.Append(face.FaceAttributes.Glasses);
            sb.Append(", ");

            // Add hair.
            sb.Append("Hair: ");

            // Display baldness confidence if over 1%.
            if (face.FaceAttributes.Hair.Bald >= 0.01f)
                sb.Append(String.Format("bald {0:F1}% ", face.FaceAttributes.Hair.Bald * 100));

            // Display all hair color attributes over 10%.
            HairColor[] hairColors = face.FaceAttributes.Hair.HairColor;
            foreach (HairColor hairColor in hairColors)
            {
                if (hairColor.Confidence >= 0.1f)
                {
                    sb.Append(hairColor.Color.ToString());
                    sb.Append(String.Format(" {0:F1}% ", hairColor.Confidence * 100));
                }
            }

            // Return the built string.
            return sb.ToString();
        }
    }
}