Tutorial: Create an Android app to detect and frame faces in an image

In this tutorial, you will create a simple Android application that uses the Azure Face API, through the Java SDK, to detect human faces in an image. The application displays a selected image and draws a frame around each detected face.

This tutorial shows you how to:

  • Create an Android application
  • Install the Face API client library
  • Use the client library to detect faces in an image
  • Draw a frame around each detected face

Android screenshot of a photo with faces framed by a red rectangle

The complete sample code is available in the Cognitive Services Face Android repository on GitHub.

If you don't have an Azure subscription, create a free account before you begin.


Create the Android Studio project

Follow these steps to create a new Android application project.

  1. In Android Studio, select Start a new Android Studio project.
  2. On the Create Android Project screen, modify the default fields, if necessary, then click Next.
  3. On the Target Android Devices screen, use the dropdown selector to choose API 22 or later, then click Next.
  4. Select Empty Activity, then click Next.
  5. Uncheck Backwards Compatibility, then click Finish.

Add the initial code

Create the UI

Open activity_main.xml. In the Layout Editor, select the Text tab, then replace the contents with the following code.

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools" tools:context=".MainActivity"
    android:layout_width="match_parent" android:layout_height="match_parent">

    <ImageView android:id="@+id/imageView1"
        android:layout_width="match_parent" android:layout_height="match_parent"
        android:layout_above="@+id/button1"
        android:contentDescription="Image with faces to analyze"/>

    <Button android:id="@+id/button1"
        android:layout_width="match_parent" android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:text="Browse for a face image"/>
</RelativeLayout>

Create the main class

Open MainActivity.java and replace the existing import statements with the following code.

import java.io.*;
import android.app.*;
import android.content.*;
import android.net.*;
import android.os.*;
import android.view.*;
import android.graphics.*;
import android.widget.*;
import android.provider.*;

Then, replace the contents of the MainActivity class with the following code. This code attaches an event handler to the Button that starts a new activity, allowing the user to select a picture. When the user returns with a selection, the picture is displayed in the ImageView.

private final int PICK_IMAGE = 1;
private ProgressDialog detectionProgressDialog;

protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    Button button1 = findViewById(R.id.button1);
    button1.setOnClickListener(new View.OnClickListener() {
        public void onClick(View v) {
            Intent intent = new Intent(Intent.ACTION_GET_CONTENT);
            intent.setType("image/*");
            startActivityForResult(Intent.createChooser(
                    intent, "Select Picture"), PICK_IMAGE);
        }
    });

    detectionProgressDialog = new ProgressDialog(this);
}

protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == PICK_IMAGE && resultCode == RESULT_OK &&
            data != null && data.getData() != null) {
        Uri uri = data.getData();
        try {
            Bitmap bitmap = MediaStore.Images.Media.getBitmap(
                    getContentResolver(), uri);
            ImageView imageView = findViewById(R.id.imageView1);
            imageView.setImageBitmap(bitmap);

            // Comment out for tutorial
            detectAndFrame(bitmap);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Try the app

Comment out the call to detectAndFrame in the onActivityResult method. Then press Run on the menu to test your app. When the app opens, either in an emulator or a connected device, click the Browse button at the bottom. The device's file selection dialog should appear. Choose an image and verify that it displays in the window. Then, close the app and advance to the next step.

Android screenshot of a photo with faces

Add the Face SDK

Add the Gradle dependency

In the Project pane, use the dropdown selector to select Android. Expand Gradle Scripts, then open build.gradle (Module: app). Add a dependency for the Face client library, com.microsoft.projectoxford:face:1.4.3, as shown in the screenshot below, then click Sync Now.
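If you prefer to edit the file directly rather than follow the screenshot, the dependencies block will look roughly like the following. This is a sketch: the other entries come from your project template and will differ.

```gradle
dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    // Face API client library
    implementation 'com.microsoft.projectoxford:face:1.4.3'
}
```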

Android Studio screenshot of App build.gradle file

Go back to MainActivity.java and add the following import statements:

import com.microsoft.projectoxford.face.*;
import com.microsoft.projectoxford.face.contract.*;

Then, insert the following code in the MainActivity class, above the onCreate method:

// Replace `<API endpoint>` with the Azure region associated with
// your subscription key. For example,
// apiEndpoint = "https://westcentralus.api.cognitive.microsoft.com/face/v1.0"
private final String apiEndpoint = "<API endpoint>";

// Replace `<Subscription Key>` with your subscription key.
// For example, subscriptionKey = "0123456789abcdef0123456789ABCDEF"
private final String subscriptionKey = "<Subscription Key>";

private final FaceServiceClient faceServiceClient =
        new FaceServiceRestClient(apiEndpoint, subscriptionKey);

You will need to replace <Subscription Key> with your subscription key. Also, replace <API endpoint> with your Face API endpoint, using the appropriate region identifier for your key (see the Face API docs for a list of all region endpoints). Free trial subscription keys are generated in the westus region.

In the Project pane, expand app, then manifests, and open AndroidManifest.xml. Insert the following element as a direct child of the manifest element:

<uses-permission android:name="android.permission.INTERNET" />

Upload image and detect faces

Your app will detect faces by calling the FaceServiceClient.detect method, which wraps the Detect REST API and returns a list of Face instances.

Each returned Face includes a rectangle to indicate its location, combined with a series of optional face attributes. In this example, only the face rectangles are requested.
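To make the rectangle contract concrete, here is a small plain-Java sketch. FaceRect is a hypothetical stand-in for the SDK's FaceRectangle class (com.microsoft.projectoxford.face.contract.FaceRectangle); it only illustrates how the service's left/top/width/height values map to the corner coordinates used for drawing later in this tutorial.

```java
// FaceRect is a hypothetical stand-in for the SDK's FaceRectangle,
// used only to illustrate the coordinate math.
class FaceRect {
    int left, top, width, height;

    FaceRect(int left, int top, int width, int height) {
        this.left = left;
        this.top = top;
        this.width = width;
        this.height = height;
    }

    // The service reports the top-left corner plus a size; drawing APIs
    // such as Canvas.drawRect take left, top, right, bottom instead.
    int right()  { return left + width; }
    int bottom() { return top + height; }
}

public class RectDemo {
    public static void main(String[] args) {
        FaceRect r = new FaceRect(100, 50, 80, 80);
        System.out.println(r.right() + "," + r.bottom()); // prints "180,130"
    }
}
```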

Insert the following two methods into the MainActivity class. Note that when face detection completes, the app calls the drawFaceRectanglesOnBitmap method to modify the ImageView. You will define this method next.

// Detect faces by uploading a face image.
// Frame faces after detection.
private void detectAndFrame(final Bitmap imageBitmap) {
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    imageBitmap.compress(Bitmap.CompressFormat.JPEG, 100, outputStream);
    ByteArrayInputStream inputStream =
            new ByteArrayInputStream(outputStream.toByteArray());

    AsyncTask<InputStream, String, Face[]> detectTask =
            new AsyncTask<InputStream, String, Face[]>() {
                String exceptionMessage = "";

                protected Face[] doInBackground(InputStream... params) {
                    try {
                        publishProgress("Detecting...");
                        Face[] result = faceServiceClient.detect(
                                params[0],
                                true,         // returnFaceId
                                false,        // returnFaceLandmarks
                                null          // returnFaceAttributes:
                                /* new FaceServiceClient.FaceAttributeType[] {
                                    FaceServiceClient.FaceAttributeType.Gender }
                                */
                        );
                        if (result == null) {
                            publishProgress(
                                    "Detection Finished. Nothing detected");
                            return null;
                        }
                        publishProgress(String.format(
                                "Detection Finished. %d face(s) detected",
                                result.length));
                        return result;
                    } catch (Exception e) {
                        exceptionMessage = String.format(
                                "Detection failed: %s", e.getMessage());
                        return null;
                    }
                }

                protected void onPreExecute() {
                    //TODO: show progress dialog
                    detectionProgressDialog.show();
                }

                protected void onProgressUpdate(String... progress) {
                    //TODO: update progress
                    detectionProgressDialog.setMessage(progress[0]);
                }

                protected void onPostExecute(Face[] result) {
                    //TODO: update face frames
                    detectionProgressDialog.dismiss();

                    if (!exceptionMessage.equals("")) {
                        showError(exceptionMessage);
                    }
                    if (result == null) return;

                    ImageView imageView = findViewById(R.id.imageView1);
                    imageView.setImageBitmap(
                            drawFaceRectanglesOnBitmap(imageBitmap, result));
                    imageBitmap.recycle();
                }
            };

    detectTask.execute(inputStream);
}

private void showError(String message) {
    new AlertDialog.Builder(this)
            .setTitle("Error")
            .setMessage(message)
            .setPositiveButton("OK", new DialogInterface.OnClickListener() {
                public void onClick(DialogInterface dialog, int id) {
                }
            })
            .create().show();
}
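The upload in detectAndFrame relies on a plain in-memory stream round-trip: Bitmap.compress writes JPEG bytes into a ByteArrayOutputStream, and the resulting byte array is re-read as an InputStream for the detect call. Stripped of the Android types, the pattern is just:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

public class StreamDemo {
    public static void main(String[] args) {
        // Write bytes into an in-memory buffer (the tutorial uses
        // Bitmap.compress for this), then re-read them as an InputStream.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(new byte[] {1, 2, 3}, 0, 3);
        ByteArrayInputStream in = new ByteArrayInputStream(out.toByteArray());
        System.out.println(in.available()); // prints "3"
    }
}
```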

Draw face rectangles

Insert the following helper method into the MainActivity class. This method draws a rectangle around each detected face, using the rectangle coordinates of each Face instance.

private static Bitmap drawFaceRectanglesOnBitmap(
        Bitmap originalBitmap, Face[] faces) {
    Bitmap bitmap = originalBitmap.copy(Bitmap.Config.ARGB_8888, true);
    Canvas canvas = new Canvas(bitmap);
    Paint paint = new Paint();
    paint.setAntiAlias(true);
    paint.setStyle(Paint.Style.STROKE);
    paint.setColor(Color.RED);
    paint.setStrokeWidth(10);
    if (faces != null) {
        for (Face face : faces) {
            FaceRectangle faceRectangle = face.faceRectangle;
            canvas.drawRect(
                    faceRectangle.left,
                    faceRectangle.top,
                    faceRectangle.left + faceRectangle.width,
                    faceRectangle.top + faceRectangle.height,
                    paint);
        }
    }
    return bitmap;
}

Finally, uncomment the call to the detectAndFrame method in onActivityResult.

Run the app

Run the application and browse for an image with a face. Wait a few seconds to allow the Face service to respond. You should see a red rectangle on each of the faces in the image.

Android screenshot of faces with red rectangles drawn around them

Next steps

In this tutorial, you learned the basic process for using the Face API Java SDK and created an application to detect and frame faces in an image. Next, learn more about the details of face detection.