Quickstart: Detect faces in an image using the REST API and Java

In this quickstart, you'll use the Azure Face REST API with Java to detect human faces in an image.

If you don't have an Azure subscription, create a free account before you begin.

Prerequisites


  • Azure subscription - Create one for free
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, click Go to resource.
    • You will need the key and endpoint from the resource you create to connect your application to the Face API. Later in the quickstart, you'll store them in environment variables that the code reads.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
  • Any Java IDE of your choice.

Create the Java project

  1. Create a new command-line Java app in your IDE and add a Main class with a main method.
  2. Add the Apache HttpComponents client and org.json libraries to your Java project. If you're using Maven, you can pull both from Maven Central.
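
If you're building with Maven, dependencies along these lines should work (the version numbers here are illustrative; check Maven Central for the current releases):

```xml
<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpclient</artifactId>
  <version>4.5.13</version>
</dependency>
<dependency>
  <groupId>org.json</groupId>
  <artifactId>json</artifactId>
  <version>20210307</version>
</dependency>
```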

Add face detection code

Open the main class of your project. Here, you will add the code needed to load images and detect faces.

Import packages

Add the following import statements to the top of the file.

// This sample uses Apache HttpComponents:
// http://hc.apache.org/httpcomponents-core-ga/httpcore/apidocs/
// https://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/

import java.net.URI;
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.client.utils.URIBuilder;
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.http.util.EntityUtils;
import org.json.JSONArray;
import org.json.JSONObject;

Add essential fields

Replace the Main class with the following code. This data specifies how to connect to the Face service and where to get the input data. The code reads your key and endpoint from the FACE_SUBSCRIPTION_KEY and FACE_ENDPOINT environment variables, so set those before you run the app. You may also wish to set the imageWithFaces value to the URL of a different image.


New resources created after July 1, 2019, will use custom subdomain names. For more information and a complete list of regional endpoints, see Custom subdomain names for Cognitive Services.

The returnFaceAttributes query parameter, which you'll add later in this quickstart, takes a comma-separated list of attribute names. It specifies which information to retrieve about the detected faces.

/*
 * To compile and run, enter the following at a command prompt:
 *   javac Detect.java -cp .;lib\*
 *   java -cp .;lib\* Detect
 */
public class Detect {
    private static final String subscriptionKey = System.getenv("FACE_SUBSCRIPTION_KEY");
    private static final String endpoint = System.getenv("FACE_ENDPOINT");

    // The JSON request body, containing the URL of the image to analyze.
    private static final String imageWithFaces =
        "{\"url\":\"https://upload.wikimedia.org/wikipedia/commons/c/c3/RH_Louise_Lillian_Gish.jpg\"}";
Call the face detection REST API

Add the main method with the following code. It constructs a REST call to the Face API to detect face information in the remote image, then reads the raw JSON response from the response entity.

    public static void main(String[] args) {
        HttpClient httpclient = HttpClientBuilder.create().build();

        try {
            URIBuilder builder = new URIBuilder(endpoint + "/face/v1.0/detect");

            // Request parameters. All of them are optional.
            builder.setParameter("detectionModel", "detection_02");
            builder.setParameter("returnFaceId", "true");

            // Prepare the URI for the REST API call.
            URI uri = builder.build();
            HttpPost request = new HttpPost(uri);

            // Request headers.
            request.setHeader("Content-Type", "application/json");
            request.setHeader("Ocp-Apim-Subscription-Key", subscriptionKey);

            // Request body.
            StringEntity reqEntity = new StringEntity(imageWithFaces);
            request.setEntity(reqEntity);

            // Execute the REST API call and get the response entity.
            HttpResponse response = httpclient.execute(request);
            HttpEntity entity = response.getEntity();

Parse the JSON response

Directly below the previous code, add the following block, which converts the returned JSON data into a more easily readable format before printing it to the console. Finally, close out the try-catch block, the main method, and the Detect class.

            if (entity != null) {
                // Format and display the JSON response.
                System.out.println("REST Response:\n");

                String jsonString = EntityUtils.toString(entity).trim();
                if (jsonString.charAt(0) == '[') {
                    // The response is an array of face objects.
                    JSONArray jsonArray = new JSONArray(jsonString);
                    System.out.println(jsonArray.toString(2));
                }
                else if (jsonString.charAt(0) == '{') {
                    // The response is a single object, typically an error.
                    JSONObject jsonObject = new JSONObject(jsonString);
                    System.out.println(jsonObject.toString(2));
                } else {
                    System.out.println(jsonString);
                }
            }
        }
        catch (Exception e) {
            // Display error message.
            System.out.println(e.getMessage());
        }
    }
}
            // Display error message.

Run the app

Compile the code and run it. A successful response will display Face data in easily readable JSON format in the console window. For example:

[{
  "faceRectangle": {
    "top": 131,
    "left": 177,
    "width": 162,
    "height": 162
  }
}]
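
If you want to work with the rectangle values programmatically rather than just printing them, you can pull them out of the parsed array with org.json. The following is a standalone sketch using a hardcoded copy of the sample response above (the class and variable names are illustrative):

```java
import org.json.JSONArray;
import org.json.JSONObject;

public class ParseRectangle {
    public static void main(String[] args) {
        // A hardcoded copy of the sample response body shown above.
        String jsonString =
            "[{\"faceRectangle\":{\"top\":131,\"left\":177,\"width\":162,\"height\":162}}]";

        // The detect call returns one array element per detected face.
        JSONArray faces = new JSONArray(jsonString);
        for (int i = 0; i < faces.length(); i++) {
            JSONObject rect = faces.getJSONObject(i).getJSONObject("faceRectangle");
            System.out.println("Face " + i + " at (" + rect.getInt("left") + ", "
                + rect.getInt("top") + "), size " + rect.getInt("width")
                + "x" + rect.getInt("height"));
            // prints: Face 0 at (177, 131), size 162x162
        }
    }
}
```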

Extract face attributes

To extract face attributes, use detection model 1 and add the returnFaceAttributes query parameter.

builder.setParameter("detectionModel", "detection_01");
builder.setParameter("returnFaceAttributes", "age,gender,headPose,smile,facialHair,glasses,emotion,hair,makeup,occlusion,accessories,blur,exposure,noise");

The response now includes face attributes. For example:

[{
  "faceRectangle": {
    "top": 131,
    "left": 177,
    "width": 162,
    "height": 162
  },
  "faceAttributes": {
    "makeup": {
      "eyeMakeup": true,
      "lipMakeup": true
    },
    "facialHair": {
      "sideburns": 0,
      "beard": 0,
      "moustache": 0
    },
    "gender": "female",
    "accessories": [],
    "blur": {
      "blurLevel": "low",
      "value": 0.06
    },
    "headPose": {
      "roll": 0.1,
      "pitch": 0,
      "yaw": -32.9
    },
    "smile": 0,
    "glasses": "NoGlasses",
    "hair": {
      "bald": 0,
      "invisible": false,
      "hairColor": [
        { "color": "brown", "confidence": 1 },
        { "color": "black", "confidence": 0.87 },
        { "color": "other", "confidence": 0.51 },
        { "color": "blond", "confidence": 0.08 },
        { "color": "red", "confidence": 0.08 },
        { "color": "gray", "confidence": 0.02 }
      ]
    },
    "emotion": {
      "contempt": 0,
      "surprise": 0.005,
      "happiness": 0,
      "neutral": 0.986,
      "sadness": 0.009,
      "disgust": 0,
      "anger": 0,
      "fear": 0
    },
    "exposure": {
      "value": 0.67,
      "exposureLevel": "goodExposure"
    },
    "occlusion": {
      "eyeOccluded": false,
      "mouthOccluded": false,
      "foreheadOccluded": false
    },
    "noise": {
      "noiseLevel": "low",
      "value": 0
    },
    "age": 22.9
  },
  "faceId": "49d55c17-e018-4a42-ba7b-8cbbdfae7c6f"
}]
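
To consume these attributes in code rather than reading the raw JSON, you can navigate the parsed object the same way. The following standalone sketch (the class name is illustrative, and the response string is a trimmed copy of the sample above) finds the highest-scoring emotion for each face:

```java
import org.json.JSONArray;
import org.json.JSONObject;

public class TopEmotion {
    public static void main(String[] args) {
        // A trimmed copy of the sample response body shown above.
        String jsonString = "[{\"faceAttributes\":{\"age\":22.9,"
            + "\"emotion\":{\"neutral\":0.986,\"sadness\":0.009,\"surprise\":0.005,"
            + "\"happiness\":0,\"anger\":0,\"contempt\":0,\"disgust\":0,\"fear\":0}}}]";

        JSONArray faces = new JSONArray(jsonString);
        for (int i = 0; i < faces.length(); i++) {
            JSONObject attrs = faces.getJSONObject(i).getJSONObject("faceAttributes");
            JSONObject emotion = attrs.getJSONObject("emotion");

            // Scan all emotion scores and keep the largest.
            String topEmotion = null;
            double topScore = -1;
            for (String key : emotion.keySet()) {
                double score = emotion.getDouble(key);
                if (score > topScore) {
                    topScore = score;
                    topEmotion = key;
                }
            }
            System.out.println("Age " + attrs.getDouble("age")
                + ", dominant emotion: " + topEmotion + " (" + topScore + ")");
            // prints: Age 22.9, dominant emotion: neutral (0.986)
        }
    }
}
```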

Next steps

In this quickstart, you created a simple Java console application that uses REST calls to the Azure Face API to detect faces in an image and return their attributes. Next, explore the Face API reference documentation to learn more about the supported scenarios.