Quickstart: Use the Content Moderator client library

Important

Azure Content Moderator is deprecated as of February 2024 and will be retired by February 2027. It is replaced by Azure AI Content Safety, which offers advanced AI features and enhanced performance.

Azure AI Content Safety is a comprehensive solution designed to detect harmful user-generated and AI-generated content in applications and services. Azure AI Content Safety is suitable for many scenarios such as online marketplaces, gaming companies, social messaging platforms, enterprise media companies, and K-12 education solution providers. Here's an overview of its features and capabilities:

  • Text and Image Detection APIs: Scan text and images for sexual content, violence, hate, and self-harm with multiple severity levels.
  • Content Safety Studio: An online tool designed to handle potentially offensive, risky, or undesirable content using our latest content moderation ML models. It provides templates and customized workflows that enable users to build their own content moderation systems.
  • Language support: Azure AI Content Safety supports more than 100 languages and is specifically trained on English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese.

Azure AI Content Safety provides a robust and flexible solution for your content moderation needs. By switching from Content Moderator to Azure AI Content Safety, you can take advantage of the latest tools and technologies to ensure that your content is always moderated to your exact specifications.

Learn more about Azure AI Content Safety and explore how it can elevate your content moderation strategy.

Get started with the Azure Content Moderator client library for .NET. Follow these steps to install the NuGet package and try out the example code for basic tasks.

Content Moderator is an AI service that lets you handle content that is potentially offensive, risky, or otherwise undesirable. Use the AI-powered content moderation service to scan text, images, and videos and apply content flags automatically. Build content filtering software into your app to comply with regulations or maintain the intended environment for your users.

Use the Content Moderator client library for .NET to:

  • Moderate text
  • Moderate images

Reference documentation | Library source code | Package (NuGet) | Samples

Prerequisites

  • Azure subscription - Create one for free
  • The Visual Studio IDE or the current version of .NET Core.
  • Once you have your Azure subscription, create a Content Moderator resource in the Azure portal to get your key and endpoint. Wait for it to deploy and click the Go to resource button.
    • You will need the key and endpoint from the resource you create to connect your application to Content Moderator. You'll paste your key and endpoint into the code below later in the quickstart.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Setting up

Create a new C# application

Using Visual Studio, create a new .NET Core application.
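If you prefer the command line over Visual Studio, you can create the console project with the .NET CLI instead. The project name below is only an example:

dotnet new console -n content-moderator-quickstart
cd content-moderator-quickstart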

Install the client library

Once you've created a new project, install the client library by right-clicking on the project solution in the Solution Explorer and selecting Manage NuGet Packages. In the package manager that opens, select Browse, check Include prerelease, and search for Microsoft.Azure.CognitiveServices.ContentModerator. Select version 2.0.0, and then select Install.
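Alternatively, if you're working outside Visual Studio, you can install the same package version with the .NET CLI:

dotnet add package Microsoft.Azure.CognitiveServices.ContentModerator --version 2.0.0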

Tip

Want to view the whole quickstart code file at once? You can find it on GitHub; it contains all the code examples in this quickstart.

From the project directory, open the Program.cs file in your preferred editor or IDE. Add the following using statements:

using Microsoft.Azure.CognitiveServices.ContentModerator;
using Microsoft.Azure.CognitiveServices.ContentModerator.Models;
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading;

In the Program class, create variables for your resource's key and endpoint.

Important

Go to the Azure portal. If the Content Moderator resource you created in the Prerequisites section deployed successfully, click the Go to Resource button under Next Steps. You can find your key and endpoint on the resource's Keys and Endpoint page, under Resource Management.

// Your Content Moderator subscription key is found in your Azure portal resource on the 'Keys' page.
private static readonly string SubscriptionKey = "PASTE_YOUR_CONTENT_MODERATOR_SUBSCRIPTION_KEY_HERE";
// Base endpoint URL. Found on 'Overview' page in Azure resource. For example: https://westus.api.cognitive.microsoft.com
private static readonly string Endpoint = "PASTE_YOUR_CONTENT_MODERATOR_ENDPOINT_HERE";

Important

Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like Azure Key Vault. See the Azure AI services security article for more information.
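As an interim step short of a full secrets store, you can read the values from environment variables instead of hard-coding them. A minimal sketch follows; the environment variable names are illustrative, not a standard:

// Illustrative sketch: read credentials from environment variables instead
// of hard-coding them. The variable names here are examples only.
private static readonly string SubscriptionKey =
    Environment.GetEnvironmentVariable("CONTENT_MODERATOR_SUBSCRIPTION_KEY");
private static readonly string Endpoint =
    Environment.GetEnvironmentVariable("CONTENT_MODERATOR_ENDPOINT");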

In the application's Main() method, add calls for the methods used in this quickstart. You'll create these later.

// Create an image moderation client
ContentModeratorClient clientImage = Authenticate(SubscriptionKey, Endpoint);
// Create a text moderation client
ContentModeratorClient clientText = Authenticate(SubscriptionKey, Endpoint);
// Moderate text from a file
ModerateText(clientText, TextFile, TextOutputFile);
// Moderate images from a list of image URLs
ModerateImages(clientImage, ImageUrlFile, ImageOutputFile);

Object model

The following classes handle some of the major features of the Content Moderator .NET client library.

Name | Description
ContentModeratorClient | This class is needed for all Content Moderator functionality. You instantiate it with your subscription information, and you use it to produce instances of other classes.
ImageModeration | This class provides the functionality for analyzing images for adult content, personal information, or human faces.
TextModeration | This class provides the functionality for analyzing text for language, profanity, errors, and personal information.

Code examples

These code snippets show you how to do the following tasks with the Content Moderator client library for .NET:

  • Authenticate the client
  • Moderate text
  • Moderate images

Authenticate the client

In a new method, instantiate client objects with your endpoint and key.

public static ContentModeratorClient Authenticate(string key, string endpoint)
{
    ContentModeratorClient client = new ContentModeratorClient(new ApiKeyServiceClientCredentials(key));
    client.Endpoint = endpoint;

    return client;
}

Moderate text

The following code uses a Content Moderator client to analyze a body of text and print the results to the console. In the root of your Program class, define input and output files:

// TEXT MODERATION
// Name of the file that contains text
private static readonly string TextFile = "TextFile.txt";
// The name of the file to contain the output from the evaluation.
private static string TextOutputFile = "TextModerationOutput.txt";

Then at the root of your project, add a TextFile.txt file. Add your own text to this file, or use the following sample text:

Is this a grabage email abcdef@abcd.com, phone: 4255550111, IP: 255.255.255.255, 1234 Main Boulevard, Panapolis WA 96555.
<offensive word> is the profanity here. Is this information PII? phone 4255550111

Then define the text moderation method somewhere in your Program class:

/*
 * TEXT MODERATION
 * This example moderates text from file.
 */
public static void ModerateText(ContentModeratorClient client, string inputFile, string outputFile)
{
    Console.WriteLine("--------------------------------------------------------------");
    Console.WriteLine();
    Console.WriteLine("TEXT MODERATION");
    Console.WriteLine();
    // Load the input text.
    string text = File.ReadAllText(inputFile);

    // Remove carriage returns
    text = text.Replace(Environment.NewLine, " ");
    // Convert string to a byte[], then into a stream (for parameter in ScreenText()).
    byte[] textBytes = Encoding.UTF8.GetBytes(text);
    MemoryStream stream = new MemoryStream(textBytes);

    Console.WriteLine("Screening {0}...", inputFile);

    // Save the moderation results to a file.
    using (StreamWriter outputWriter = new StreamWriter(outputFile, false))
    {
        using (client)
        {
            // Screen the input text: check for profanity, classify the text into three categories,
            // do autocorrect text, and check for personally identifying information (PII)
            outputWriter.WriteLine("Autocorrect typos, check for matching terms, PII, and classify.");

            // Moderate the text
            var screenResult = client.TextModeration.ScreenText("text/plain", stream, "eng", true, true, null, true);
            outputWriter.WriteLine(JsonConvert.SerializeObject(screenResult, Formatting.Indented));
        }

        outputWriter.Flush();
        outputWriter.Close();
    }

    Console.WriteLine("Results written to {0}", outputFile);
    Console.WriteLine();
}

Moderate images

The following code uses a Content Moderator client, along with an ImageModeration object, to analyze remote images for adult and racy content.

Note

You can also analyze the content of a local image. See the reference documentation for methods and operations that work with local images.
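If you want to try it, a minimal sketch using the EvaluateFileInput method might look like the following. The file name is an example, and you should confirm the exact overloads in the reference documentation:

// Sketch: evaluate a local image for adult/racy content.
// "sample.jpg" is an example path; adjust it to a real local file.
using (Stream imageStream = File.OpenRead("sample.jpg"))
{
    Evaluate result = client.ImageModeration.EvaluateFileInput(imageStream, true);
    Console.WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
}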

Get sample images

Define your input and output files at the root of your Program class:

// IMAGE MODERATION
//The name of the file that contains the image URLs to evaluate.
private static readonly string ImageUrlFile = "ImageFiles.txt";
// The name of the file to contain the output from the evaluation.
private static string ImageOutputFile = "ImageModerationOutput.json";

Then create the input file, ImageFiles.txt, at the root of your project. In this file, you add the URLs of images to analyze—one URL on each line. You can use the following sample images:

https://moderatorsampleimages.blob.core.windows.net/samples/sample2.jpg
https://moderatorsampleimages.blob.core.windows.net/samples/sample5.png

Define helper class

Add the following class definition within the Program class. This inner class will handle image moderation results.

// Contains the image moderation results for an image, 
// including text and face detection results.
public class EvaluationData
{
    // The URL of the evaluated image.
    public string ImageUrl;

    // The image moderation results.
    public Evaluate ImageModeration;

    // The text detection results.
    public OCR TextDetection;

    // The face detection results.
    public FoundFaces FaceDetection;
}

Define the image moderation method

The following method iterates through the image URLs in a text file, creates an EvaluationData instance, and analyzes the image for adult/racy content, text, and human faces. Then it adds the final EvaluationData instance to a list and writes the complete list of returned data to the console.

Iterate through images

/*
 * IMAGE MODERATION
 * This example moderates images from URLs.
 */
public static void ModerateImages(ContentModeratorClient client, string urlFile, string outputFile)
{
    Console.WriteLine("--------------------------------------------------------------");
    Console.WriteLine();
    Console.WriteLine("IMAGE MODERATION");
    Console.WriteLine();
    // Create an object to store the image moderation results.
    List<EvaluationData> evaluationData = new List<EvaluationData>();

    using (client)
    {
        // Read image URLs from the input file and evaluate each one.
        using (StreamReader inputReader = new StreamReader(urlFile))
        {
            while (!inputReader.EndOfStream)
            {
                string line = inputReader.ReadLine().Trim();
                if (line != String.Empty)
                {
                    Console.WriteLine("Evaluating {0}...", Path.GetFileName(line));
                    var imageUrl = new BodyModel("URL", line.Trim());

Analyze content

For more information on the image attributes that Content Moderator screens for, see the Image moderation concepts guide.

            var imageData = new EvaluationData
            {
                ImageUrl = imageUrl.Value,

                // Evaluate for adult and racy content.
                ImageModeration =
                client.ImageModeration.EvaluateUrlInput("application/json", imageUrl, true)
            };
            Thread.Sleep(1000);

            // Detect and extract text.
            imageData.TextDetection =
                client.ImageModeration.OCRUrlInput("eng", "application/json", imageUrl, true);
            Thread.Sleep(1000);

            // Detect faces.
            imageData.FaceDetection =
                client.ImageModeration.FindFacesUrlInput("application/json", imageUrl, true);
            Thread.Sleep(1000);

            // Add results to Evaluation object
            evaluationData.Add(imageData);
        }
    }
}

Write moderation results to file

        // Save the moderation results to a file.
        using (StreamWriter outputWriter = new StreamWriter(outputFile, false))
        {
            outputWriter.WriteLine(JsonConvert.SerializeObject(
                evaluationData, Formatting.Indented));

            outputWriter.Flush();
            outputWriter.Close();
        }
        Console.WriteLine();
        Console.WriteLine("Image moderation results written to output file: " + outputFile);
        Console.WriteLine();
    }
}

Run the application

Run the application by clicking the Debug button at the top of the IDE window.
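Alternatively, if you created the project with the .NET CLI, you can run it from the project directory:

dotnet run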

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to use the Content Moderator .NET library to do moderation tasks. Next, learn more about the moderation of images or other media by reading a conceptual guide.

Get started with the Azure Content Moderator client library for Java. Follow these steps to install the Maven package and try out the example code for basic tasks.

Content Moderator is an AI service that lets you handle content that is potentially offensive, risky, or otherwise undesirable. Use the AI-powered content moderation service to scan text, images, and videos and apply content flags automatically. Build content filtering software into your app to comply with regulations or maintain the intended environment for your users.

Use the Content Moderator client library for Java to:

  • Moderate text
  • Moderate images

Reference documentation | Library source code | Artifact (Maven) | Samples

Prerequisites

  • An Azure subscription - Create one for free
  • The current version of the Java Development Kit (JDK)
  • The Gradle build tool, or another dependency manager.
  • Once you have your Azure subscription, create a Content Moderator resource in the Azure portal to get your key and endpoint. Wait for it to deploy and click the Go to resource button.
    • You will need the key and endpoint from the resource you create to connect your application to Content Moderator. You'll paste your key and endpoint into the code below later in the quickstart.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Setting up

Create a new Gradle project

In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it.

mkdir myapp && cd myapp

Run the gradle init command from your working directory. This command will create essential build files for Gradle, including build.gradle.kts, which is used to create and configure your application.

gradle init --type basic

When prompted to choose a DSL, select Kotlin.

Install the client library

Find build.gradle.kts and open it with your preferred IDE or text editor. Then copy in the following build configuration. This configuration defines the project as a Java application whose entry point is the class ContentModeratorQuickstart. It imports the Content Moderator client library and the Gson library for JSON serialization.

plugins {
    java
    application
}

application {
    mainClass.set("ContentModeratorQuickstart")
}

repositories {
    mavenCentral()
}

dependencies {
    implementation(group = "com.microsoft.azure.cognitiveservices", name = "azure-cognitiveservices-contentmoderator", version = "1.0.2-beta")
    implementation(group = "com.google.code.gson", name = "gson", version = "2.8.5")
}

Create a Java file

From your working directory, run the following command to create a project source folder:

mkdir -p src/main/java

Navigate to the new folder and create a file called ContentModeratorQuickstart.java. Open it in your preferred editor or IDE and add the following import statements:

import com.google.gson.*;

import com.microsoft.azure.cognitiveservices.vision.contentmoderator.*;
import com.microsoft.azure.cognitiveservices.vision.contentmoderator.models.*;

import java.io.*;
import java.util.*;
import java.util.concurrent.*;

Tip

Want to view the whole quickstart code file at once? You can find it on GitHub; it contains all the code examples in this quickstart.

In the application's ContentModeratorQuickstart class, create variables for your resource's key and endpoint.

Important

Go to the Azure portal. If the Content Moderator resource you created in the Prerequisites section deployed successfully, click the Go to Resource button under Next Steps. You can find your key and endpoint on the resource's Keys and Endpoint page, under Resource Management.

private static final String subscriptionKey = "<your-subscription-key>";
private static final String endpoint = "<your-api-endpoint>";

Important

Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like Azure Key Vault. See the Azure AI services security article for more information.

In the application's main method, add calls for the methods used in this quickstart. You'll define these methods later.

// Create a List in which to store the image moderation results.
List<EvaluationData> evaluationData = new ArrayList<EvaluationData>();

// Moderate URL images
moderateImages(client, evaluationData);
// Moderate text from file
moderateText(client);

Object model

The following classes handle some of the major features of the Content Moderator Java client library.

Name | Description
ContentModeratorClient | This class is needed for all Content Moderator functionality. You instantiate it with your subscription information, and you use it to produce instances of other classes.
ImageModeration | This class provides the functionality for analyzing images for adult content, personal information, or human faces.
TextModerations | This class provides the functionality for analyzing text for language, profanity, errors, and personal information.

Code examples

These code snippets show you how to do the following tasks with the Content Moderator client library for Java:

  • Authenticate the client
  • Moderate text
  • Moderate images

Authenticate the client

In the application's main method, create a ContentModeratorClient object using your endpoint and subscription key values.

// Create a Content Moderator client with the endpoint and key
// values you defined earlier.
ContentModeratorClient client = ContentModeratorManager.authenticate(AzureRegionBaseUrl.fromString(endpoint),
        subscriptionKey);

Moderate text

Set up sample text

At the top of your ContentModeratorQuickstart class, define a reference to a local text file. Create a TextModeration.txt file under src/main/resources in your project and enter the text you'd like to analyze. The path below uses Windows-style separators; use forward slashes on other platforms.

// TEXT MODERATION variable
private static File textFile = new File("src\\main\\resources\\TextModeration.txt");
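Add your own text to this file, or use the same sample text as the .NET quickstart earlier in this article:

Is this a grabage email abcdef@abcd.com, phone: 4255550111, IP: 255.255.255.255, 1234 Main Boulevard, Panapolis WA 96555.
<offensive word> is the profanity here. Is this information PII? phone 4255550111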

Analyze text

Create a new method that reads the .txt file and calls the screenText method on each line.

public static void moderateText(ContentModeratorClient client) {
    System.out.println("---------------------------------------");
    System.out.println("MODERATE TEXT");
    System.out.println();

    try (BufferedReader inputStream = new BufferedReader(new FileReader(textFile))) {
        String line;
        Screen textResults = null;
        // For formatting the printed results
        Gson gson = new GsonBuilder().setPrettyPrinting().create();

        while ((line = inputStream.readLine()) != null) {
            if (line.length() > 0) {
                textResults = client.textModerations().screenText("text/plain", line.getBytes(), null);
                // Uncomment below line to print in console
                // System.out.println(gson.toJson(textResults).toString());
            }
        }

Add the following code to print the moderation results to a .json file in your project directory.

System.out.println("Text moderation status: " + textResults.status().description());
System.out.println();

// Create output results file to TextModerationOutput.json
BufferedWriter writer = new BufferedWriter(
        new FileWriter(new File("src\\main\\resources\\TextModerationOutput.json")));
writer.write(gson.toJson(textResults).toString());
System.out.println("Check TextModerationOutput.json to see printed results.");
System.out.println();
writer.close();

Close out the try and catch statement to complete the method.

    } catch (Exception e) {
        System.out.println(e.getMessage());
        e.printStackTrace();
    }
}

Moderate images

Set up sample image

In a new method, construct a BodyModelModel object with a given URL string that points to an image.

public static void moderateImages(ContentModeratorClient client, List<EvaluationData> resultsList) {
    System.out.println();
    System.out.println("---------------------------------------");
    System.out.println("MODERATE IMAGES");
    System.out.println();

    try {
        String urlString = "https://moderatorsampleimages.blob.core.windows.net/samples/sample2.jpg";
        // Evaluate each line of text
        BodyModelModel url = new BodyModelModel();
        url.withDataRepresentation("URL");
        url.withValue(urlString);
        // Save to EvaluationData class for later
        EvaluationData imageData = new EvaluationData();
        imageData.ImageUrl = url.value();

Define helper class

Then, in your ContentModeratorQuickstart.java file, add the following class definition inside the ContentModeratorQuickstart class. This inner class is used in the image moderation process.

// Contains the image moderation results for an image, including text and face
// detection from the image.
public static class EvaluationData {
    // The URL of the evaluated image.
    public String ImageUrl;
    // The image moderation results.
    public Evaluate ImageModeration;
    // The text detection results.
    public OCR TextDetection;
    // The face detection results.
    public FoundFaces FaceDetection;
}

Analyze content

This line of code checks the image at the given URL for adult or racy content. See the Image moderation conceptual guide for information on these terms.

// Evaluate for adult and racy content.
imageData.ImageModeration = client.imageModerations().evaluateUrlInput("application/json", url,
        new EvaluateUrlInputOptionalParameter().withCacheImage(true));
Thread.sleep(1000);

Check for text

This line of code checks the image for visible text.

// Detect and extract text from image.
imageData.TextDetection = client.imageModerations().oCRUrlInput("eng", "application/json", url,
        new OCRUrlInputOptionalParameter().withCacheImage(true));
Thread.sleep(1000);

Check for faces

This line of code checks the image for human faces.

// Detect faces.
imageData.FaceDetection = client.imageModerations().findFacesUrlInput("application/json", url,
        new FindFacesUrlInputOptionalParameter().withCacheImage(true));
Thread.sleep(1000);

Finally, store the returned information in the EvaluationData list.

resultsList.add(imageData);

Still inside the try block, after the face detection call, add the following code, which prints the results to the console and to an output file, src/main/resources/ImageModerationOutput.json.

// Save the moderation results to a file.
// ImageModerationOutput.json contains the output from the evaluation.
// Relative paths are relative to the execution directory (where
// build.gradle.kts is located).
BufferedWriter writer = new BufferedWriter(
        new FileWriter(new File("src\\main\\resources\\ImageModerationOutput.json")));
// For formatting the printed results
Gson gson = new GsonBuilder().setPrettyPrinting().create();

writer.write(gson.toJson(resultsList).toString());
System.out.println("Check ImageModerationOutput.json to see printed results.");
writer.close();

Close out the try statement and add a catch statement to complete the method.

} catch (Exception e) {
    System.out.println(e.getMessage());
    e.printStackTrace();
}

Run the application

You can build the app with:

gradle build

Run the application with the gradle run command:

gradle run

Then navigate to the src/main/resources/TextModerationOutput.json and src/main/resources/ImageModerationOutput.json files and view the results of your content moderation.

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to use the Content Moderator Java library to perform moderation tasks. Next, learn more about the moderation of images or other media by reading a conceptual guide.

Get started with the Azure Content Moderator client library for Python. Follow these steps to install the PyPI package and try out the example code for basic tasks.

Content Moderator is an AI service that lets you handle content that is potentially offensive, risky, or otherwise undesirable. Use the AI-powered content moderation service to scan text, images, and videos and apply content flags automatically. Build content filtering software into your app to comply with regulations or maintain the intended environment for your users.

Use the Content Moderator client library for Python to:

  • Moderate text
  • Use a custom terms list
  • Moderate images
  • Use a custom image list

Reference documentation | Library source code | Package (PyPI) | Samples

Prerequisites

  • Azure subscription - Create one for free
  • Python 3.x
    • Your Python installation should include pip. You can check if you have pip installed by running pip --version on the command line. Get pip by installing the latest version of Python.
  • Once you have your Azure subscription, create a Content Moderator resource in the Azure portal to get your key and endpoint. Wait for it to deploy and click the Go to resource button.
    • You will need the key and endpoint from the resource you create to connect your application to Content Moderator. You'll paste your key and endpoint into the code below later in the quickstart.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Setting up

Install the client library

After installing Python, you can install the Content Moderator client library with the following command:

pip install --upgrade azure-cognitiveservices-vision-contentmoderator

Create a new Python application

Create a new Python script and open it in your preferred editor or IDE. Then add the following import statements to the top of the file.

import os.path
from pprint import pprint
import time
from io import BytesIO
from random import random
import uuid

from azure.cognitiveservices.vision.contentmoderator import ContentModeratorClient
from azure.cognitiveservices.vision.contentmoderator.models import *
from msrest.authentication import CognitiveServicesCredentials

Tip

Want to view the whole quickstart code file at once? You can find it on GitHub; it contains all the code examples in this quickstart.

Next, create variables for your resource's endpoint location and key.

Important

Go to the Azure portal. If the Content Moderator resource you created in the Prerequisites section deployed successfully, click the Go to Resource button under Next Steps. You can find your key and endpoint on the resource's Keys and Endpoint page, under Resource Management.

CONTENT_MODERATOR_ENDPOINT = "PASTE_YOUR_CONTENT_MODERATOR_ENDPOINT_HERE"
subscription_key = "PASTE_YOUR_CONTENT_MODERATOR_SUBSCRIPTION_KEY_HERE"

Important

Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like Azure Key Vault. See the Azure AI services security article for more information.

Object model

The following classes handle some of the major features of the Content Moderator Python client library.

Name | Description
ContentModeratorClient | This class is needed for all Content Moderator functionality. You instantiate it with your subscription information, and you use it to produce instances of other classes.
ImageModerationOperations | This class provides the functionality for analyzing images for adult content, personal information, or human faces.
TextModerationOperations | This class provides the functionality for analyzing text for language, profanity, errors, and personal information.

Code examples

These code snippets show you how to do the following tasks with the Content Moderator client library for Python:

  • Authenticate the client
  • Moderate text
  • Use a custom terms list
  • Moderate images
  • Use a custom image list

Authenticate the client

Instantiate a client with your endpoint and key. Create a CognitiveServicesCredentials object with your key, and use it with your endpoint to create a ContentModeratorClient object.

client = ContentModeratorClient(
    endpoint=CONTENT_MODERATOR_ENDPOINT,
    credentials=CognitiveServicesCredentials(subscription_key)
)

Moderate text

The following code uses a Content Moderator client to analyze a body of text and print the results to the console. First, create a text_files/ folder at the root of your project and add a content_moderator_text_moderation.txt file. Add your own text to this file, or use the following sample text:

Is this a grabage email abcdef@abcd.com, phone: 4255550111, IP: 255.255.255.255, 1234 Main Boulevard, Panapolis WA 96555.
<offensive word> is the profanity here. Is this information PII? phone 2065550111

Add a reference to the new folder.

TEXT_FOLDER = os.path.join(os.path.dirname(
    os.path.realpath(__file__)), "text_files")

Then, add the following code to your Python script.

# Screen the input text: check for profanity, autocorrect the text,
# and check for personally identifying information (PII)
with open(os.path.join(TEXT_FOLDER, 'content_moderator_text_moderation.txt'), "rb") as text_fd:
    screen = client.text_moderation.screen_text(
        text_content_type="text/plain",
        text_content=text_fd,
        language="eng",
        autocorrect=True,
        pii=True
    )
    assert isinstance(screen, Screen)
    pprint(screen.as_dict())

Use a custom terms list

The following code shows how to manage a list of custom terms for text moderation. You can use the ListManagementTermListsOperations class to create a terms list, manage the individual terms, and screen other bodies of text against it.

Get sample text

To use this sample, you must create a text_files/ folder at the root of your project and add a content_moderator_term_list.txt file. This file should contain organic text that will be checked against the list of terms. You can use the following sample text:

This text contains the terms "term1" and "term2".

Add a reference to the folder if you haven't already defined one.

TEXT_FOLDER = os.path.join(os.path.dirname(
    os.path.realpath(__file__)), "text_files")

Create a list

Add the following code to your Python script to create a custom terms list and save its ID value.

#
# Create list
#
print("\nCreating list")
custom_list = client.list_management_term_lists.create(
    content_type="application/json",
    body={
        "name": "Term list name",
        "description": "Term list description",
    }
)
print("List created:")
assert isinstance(custom_list, TermList)
pprint(custom_list.as_dict())
list_id = custom_list.id

Define list details

You can use a list's ID to edit its name and description.

#
# Update list details
#
print("\nUpdating details for list {}".format(list_id))
updated_list = client.list_management_term_lists.update(
    list_id=list_id,
    content_type="application/json",
    body={
        "name": "New name",
        "description": "New description"
    }
)
assert isinstance(updated_list, TermList)
pprint(updated_list.as_dict())

Add a term to the list

The following code adds the terms "term1" and "term2" to the list.

#
# Add terms
#
print("\nAdding terms to list {}".format(list_id))
client.list_management_term.add_term(
    list_id=list_id,
    term="term1",
    language="eng"
)
client.list_management_term.add_term(
    list_id=list_id,
    term="term2",
    language="eng"
)

Get all terms in the list

You can use the list ID to return all of the terms in the list.

#
# Get all terms ids
#
print("\nGetting all term IDs for list {}".format(list_id))
terms = client.list_management_term.get_all_terms(
    list_id=list_id, language="eng")
assert isinstance(terms, Terms)
terms_data = terms.data
assert isinstance(terms_data, TermsData)
pprint(terms_data.as_dict())

Refresh the list index

Whenever you add or remove terms from the list, you must refresh the index before you can use the updated list.
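The snippets in this section also use a LATENCY_DELAY constant that this quickstart doesn't define elsewhere; add a definition like the following near the top of your script. The value is an assumption — half a minute is usually enough for the index changes to propagate:

# Minutes to wait after a refresh so the index changes can propagate.
# 0.5 (30 seconds) is an assumed value; increase it if matches look stale.
LATENCY_DELAY = 0.5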

#
# Refresh the index
#
print("\nRefreshing the search index for list {}".format(list_id))
refresh_index = client.list_management_term_lists.refresh_index_method(
    list_id=list_id, language="eng")
assert isinstance(refresh_index, RefreshIndex)
pprint(refresh_index.as_dict())

print("\nWaiting {} minutes to allow the server time to propagate the index changes.".format(
    LATENCY_DELAY))
time.sleep(LATENCY_DELAY * 60)

Screen text against the list

The main functionality of the custom terms list is to compare a body of text against the list and find whether there are any matching terms.

#
# Screen text
#
with open(os.path.join(TEXT_FOLDER, 'content_moderator_term_list.txt'), "rb") as text_fd:
    screen = client.text_moderation.screen_text(
        text_content_type="text/plain",
        text_content=text_fd,
        language="eng",
        autocorrect=False,
        pii=False,
        list_id=list_id
    )
    assert isinstance(screen, Screen)
    pprint(screen.as_dict())

Remove a term from a list

The following code removes the term "term1" from the list.

#
# Remove terms
#
term_to_remove = "term1"
print("\nRemove term {} from list {}".format(term_to_remove, list_id))
client.list_management_term.delete_term(
    list_id=list_id,
    term=term_to_remove,
    language="eng"
)

Remove all terms from a list

Use the following code to clear a list of all its terms.

#
# Delete all terms
#
print("\nDelete all terms in the image list {}".format(list_id))
client.list_management_term.delete_all_terms(
    list_id=list_id, language="eng")

Delete a list

Use the following code to delete a custom terms list.

#
# Delete list
#
print("\nDelete the term list {}".format(list_id))
client.list_management_term_lists.delete(list_id=list_id)

Moderate images

The following code uses a Content Moderator client, along with an ImageModerationOperations object, to analyze images for adult and racy content.

Get sample images

Define a reference to some images to analyze.

IMAGE_LIST = [
    "https://moderatorsampleimages.blob.core.windows.net/samples/sample2.jpg",
    "https://moderatorsampleimages.blob.core.windows.net/samples/sample5.png"
]

Then add the following code to iterate through your images. The rest of the code in this section will go inside this loop.

for image_url in IMAGE_LIST:
    print("\nEvaluate image {}".format(image_url))

Check for adult/racy content

The following code checks the image at the given URL for adult or racy content and prints results to the console. See the Image moderation concepts guide for information on what these terms mean.

print("\nEvaluate for adult and racy content.")
evaluation = client.image_moderation.evaluate_url_input(
    content_type="application/json",
    cache_image=True,
    data_representation="URL",
    value=image_url
)
assert isinstance(evaluation, Evaluate)
pprint(evaluation.as_dict())

Check for visible text

The following code checks the image for visible text content and prints results to the console.

print("\nDetect and extract text.")
evaluation = client.image_moderation.ocr_url_input(
    language="eng",
    content_type="application/json",
    data_representation="URL",
    value=image_url,
    cache_image=True,
)
assert isinstance(evaluation, OCR)
pprint(evaluation.as_dict())

Check for faces

The following code checks the image for human faces and prints results to the console.

print("\nDetect faces.")
evaluation = client.image_moderation.find_faces_url_input(
    content_type="application/json",
    cache_image=True,
    data_representation="URL",
    value=image_url
)
assert isinstance(evaluation, FoundFaces)
pprint(evaluation.as_dict())

Use a custom image list

The following code shows how to manage a custom list of images for image moderation. This feature is useful if your platform frequently receives instances of the same set of images that you want to screen out. By maintaining a list of these specific images, you can improve performance. The ListManagementImageListsOperations class allows you to create an image list, manage the individual images on the list, and compare other images against it.

Create the following variables to store the image URLs that you'll use in this scenario.

IMAGE_LIST = {
    "Sports": [
        "https://moderatorsampleimages.blob.core.windows.net/samples/sample4.png",
        "https://moderatorsampleimages.blob.core.windows.net/samples/sample6.png",
        "https://moderatorsampleimages.blob.core.windows.net/samples/sample9.png"
    ],
    "Swimsuit": [
        "https://moderatorsampleimages.blob.core.windows.net/samples/sample1.jpg",
        "https://moderatorsampleimages.blob.core.windows.net/samples/sample3.png",
        "https://moderatorsampleimages.blob.core.windows.net/samples/sample4.png",
        "https://moderatorsampleimages.blob.core.windows.net/samples/sample16.png"
    ]
}

IMAGES_TO_MATCH = [
    "https://moderatorsampleimages.blob.core.windows.net/samples/sample1.jpg",
    "https://moderatorsampleimages.blob.core.windows.net/samples/sample4.png",
    "https://moderatorsampleimages.blob.core.windows.net/samples/sample5.png",
    "https://moderatorsampleimages.blob.core.windows.net/samples/sample16.png"
]

Note

These variables aren't the moderation list itself; they're an informal collection of image URLs that the code in the Add images to a list section adds to the actual list.

Create an image list

Add the following code to create an image list and save a reference to its ID.

#
# Create list
#
print("Creating list MyList\n")
custom_list = client.list_management_image_lists.create(
    content_type="application/json",
    body={
        "name": "MyList",
        "description": "A sample list",
        "metadata": {
            "key_one": "Acceptable",
            "key_two": "Potentially racy"
        }
    }
)
print("List created:")
assert isinstance(custom_list, ImageList)
pprint(custom_list.as_dict())
list_id = custom_list.id

Add images to a list

The following code adds all of your images to the list.

print("\nAdding images to list {}".format(list_id))
index = {}  # Keep an index url to id for later removal
for label, urls in IMAGE_LIST.items():
    for url in urls:
        image = add_images(list_id, url, label)
        if image:
            index[url] = image.content_id

Define the add_images helper function elsewhere in your script.

#
# Add images
#
def add_images(list_id, image_url, label):
    """Generic add_images from url and label."""
    print("\nAdding image {} to list {} with label {}.".format(
        image_url, list_id, label))
    try:
        added_image = client.list_management_image.add_image_url_input(
            list_id=list_id,
            content_type="application/json",
            data_representation="URL",
            value=image_url,
            label=label
        )
    except APIErrorException as err:
        # sample4.png appears in both categories, so the second add fails as a duplicate
        print("Unable to add image to list: {}".format(err))
    else:
        assert isinstance(added_image, Image)
        pprint(added_image.as_dict())
        return added_image

Get images in list

The following code prints the names of all the images in your list.

#
# Get all images ids
#
print("\nGetting all image IDs for list {}".format(list_id))
image_ids = client.list_management_image.get_all_image_ids(list_id=list_id)
assert isinstance(image_ids, ImageIds)
pprint(image_ids.as_dict())

Update list details

You can use the list ID to update the name and description of the list.

#
# Update list details
#
print("\nUpdating details for list {}".format(list_id))
updated_list = client.list_management_image_lists.update(
    list_id=list_id,
    content_type="application/json",
    body={
        "name": "Swimsuits and sports"
    }
)
assert isinstance(updated_list, ImageList)
pprint(updated_list.as_dict())

Get list details

Use the following code to print the current details of your list.

#
# Get list details
#
print("\nGetting details for list {}".format(list_id))
list_details = client.list_management_image_lists.get_details(
    list_id=list_id)
assert isinstance(list_details, ImageList)
pprint(list_details.as_dict())

Refresh the list index

After you add or remove images, you must refresh the list index before you can use it to screen other images.

#
# Refresh the index
#
print("\nRefreshing the search index for list {}".format(list_id))
refresh_index = client.list_management_image_lists.refresh_index_method(
    list_id=list_id)
assert isinstance(refresh_index, RefreshIndex)
pprint(refresh_index.as_dict())

print("\nWaiting {} minutes to allow the server time to propagate the index changes.".format(
    LATENCY_DELAY))
time.sleep(LATENCY_DELAY * 60)

Match images against the list

The main function of image lists is to compare new images and see if there are any matches.

#
# Match images against the image list.
#
for image_url in IMAGES_TO_MATCH:
    print("\nMatching image {} against list {}".format(image_url, list_id))
    match_result = client.image_moderation.match_url_input(
        content_type="application/json",
        list_id=list_id,
        data_representation="URL",
        value=image_url,
    )
    assert isinstance(match_result, MatchResponse)
    print("Is match? {}".format(match_result.is_match))
    print("Complete match details:")
    pprint(match_result.as_dict())

Remove an image from the list

The following code removes an item from the list. In this case, it is an image that does not match the list category.

#
# Remove images
#
correction = "https://moderatorsampleimages.blob.core.windows.net/samples/sample16.png"
print("\nRemove image {} from list {}".format(correction, list_id))
client.list_management_image.delete_image(
    list_id=list_id,
    image_id=index[correction]
)

Remove all images from a list

Use the following code to clear out an image list.

#
# Delete all images
#
print("\nDelete all images in the image list {}".format(list_id))
client.list_management_image.delete_all_images(list_id=list_id)

Delete the image list

Use the following code to delete a given image list.

#
# Delete list
#
print("\nDelete the image list {}".format(list_id))
client.list_management_image_lists.delete(list_id=list_id)

Run the application

Run the application with the python command on your quickstart file.

python quickstart-file.py

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to use the Content Moderator Python library to do moderation tasks. Next, learn more about the moderation of images or other media by reading a conceptual guide.

Get started with the Azure Content Moderator REST API.

Content Moderator is an AI service that lets you handle content that is potentially offensive, risky, or otherwise undesirable. Use the AI-powered content moderation service to scan text, images, and videos and apply content flags automatically. Build content filtering software into your app to comply with regulations or maintain the intended environment for your users.

Use the Content Moderator REST API to:

  • Moderate text
  • Moderate images

Prerequisites

  • Azure subscription - Create one for free
  • Once you have your Azure subscription, create a Content Moderator resource in the Azure portal to get your key and endpoint. Wait for it to deploy and click the Go to resource button.
    • You will need the key and endpoint from the resource you create to connect your application to Content Moderator. You'll paste your key and endpoint into the code below later in the quickstart.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
  • PowerShell version 6.0+, or a similar command-line application.

Moderate text

You'll use a command like the following to call the Content Moderator API to analyze a body of text and print the results to the console.

curl -v -X POST "https://westus.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessText/Screen?autocorrect=True&PII=True&classify=True&language={string}"
-H "Content-Type: text/plain"
-H "Ocp-Apim-Subscription-Key: {subscription key}"
--data-ascii "Is this a crap email abcdef@abcd.com, phone: 6657789887, IP: 255.255.255.255, 1 Microsoft Way, Redmond, WA 98052"

Copy the command to a text editor and make the following changes:

  1. Assign Ocp-Apim-Subscription-Key to your valid Content Moderator subscription key.

    Important

    Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like Azure Key Vault. See the Azure AI services security article for more information.

  2. Change the first part of the query URL to match the endpoint that corresponds to your subscription key.

    Note

    New resources created after July 1, 2019, will use custom subdomain names. For more information and a complete list of regional endpoints, see Custom subdomain names for Azure AI services.

  3. Optionally change the body of the request to whatever string of text you'd like to analyze.

Once you've made your changes, open a command prompt and enter the new command.
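If you'd rather stay in PowerShell than use curl, an equivalent request with Invoke-RestMethod might look like the following sketch; paste in your own endpoint and key:

$endpoint = "PASTE_YOUR_CONTENT_MODERATOR_ENDPOINT_HERE"
$key = "PASTE_YOUR_CONTENT_MODERATOR_SUBSCRIPTION_KEY_HERE"
$uri = "$endpoint/contentmoderator/moderate/v1.0/ProcessText/Screen?autocorrect=True&PII=True&classify=True&language=eng"
$headers = @{ "Ocp-Apim-Subscription-Key" = $key }
$body = "Is this a crap email abcdef@abcd.com, phone: 6657789887, IP: 255.255.255.255, 1 Microsoft Way, Redmond, WA 98052"
Invoke-RestMethod -Uri $uri -Method Post -ContentType "text/plain" -Headers $headers -Body $body | ConvertTo-Json -Depth 10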

Examine the results

You should see the text moderation results displayed as JSON data in the console window. For example:

{
  "OriginalText": "Is this a <offensive word> email abcdef@abcd.com, phone: 6657789887, IP: 255.255.255.255,\n1 Microsoft Way, Redmond, WA 98052\n",
  "NormalizedText": "Is this a <offensive word> email abide@ abed. com, phone: 6657789887, IP: 255. 255. 255. 255, \n1 Microsoft Way, Redmond, WA 98052",
  "AutoCorrectedText": "Is this a <offensive word> email abide@ abed. com, phone: 6657789887, IP: 255. 255. 255. 255, \n1 Microsoft Way, Redmond, WA 98052",
  "Misrepresentation": null,
  "PII": {
    "Email": [
      {
        "Detected": "abcdef@abcd.com",
        "SubType": "Regular",
        "Text": "abcdef@abcd.com",
        "Index": 21
      }
    ],
    "IPA": [
      {
        "SubType": "IPV4",
        "Text": "255.255.255.255",
        "Index": 61
      }
    ],
    "Phone": [
      {
        "CountryCode": "US",
        "Text": "6657789887",
        "Index": 45
      }
    ],
    "Address": [
      {
        "Text": "1 Microsoft Way, Redmond, WA 98052",
        "Index": 78
      }
    ]
  },
 "Classification": {
    "Category1": 
    {
      "Score": 0.5
    },
    "Category2": 
    {
      "Score": 0.6
    },
    "Category3": 
    {
      "Score": 0.5
    },
    "ReviewRecommended": true
  },
  "Language": "eng",
  "Terms": [
    {
      "Index": 10,
      "OriginalIndex": 10,
      "ListId": 0,
      "Term": "<offensive word>"
    }
  ],
  "Status": {
    "Code": 3000,
    "Description": "OK",
    "Exception": null
  },
  "TrackingId": "1717c837-cfb5-4fc0-9adc-24859bfd7fac"
}

For more information on the text attributes that Content Moderator screens for, see the Text moderation concepts guide.

Moderate images

You'll use a command like the following to call the Content Moderator API to moderate a remote image and print the results to the console.

curl -v -X POST "https://westus.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessImage/Evaluate?CacheImage={boolean}" 
-H "Content-Type: application/json"
-H "Ocp-Apim-Subscription-Key: {subscription key}" 
--data-ascii "{\"DataRepresentation\":\"URL\", \"Value\":\"https://moderatorsampleimages.blob.core.windows.net/samples/sample.jpg\"}"

Copy the command to a text editor and make the following changes:

  1. Assign Ocp-Apim-Subscription-Key to your valid Content Moderator subscription key.
  2. Change the first part of the query URL to match the endpoint that corresponds to your subscription key.
  3. Optionally change the "Value" URL in the request body to whatever remote image you'd like to moderate.

Tip

You can also moderate local images by passing their byte data into the request body. See the reference documentation to learn how to do this.
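As a sketch, sending a local image means setting Content-Type to the image's MIME type and passing the file bytes in the body; sample.jpg below is an example file name, and you should confirm the details in the reference documentation:

curl -v -X POST "https://westus.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessImage/Evaluate" -H "Content-Type: image/jpeg" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-binary "@sample.jpg"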

Once you've made your changes, open a command prompt and enter the new command.
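The same image evaluation with PowerShell's Invoke-RestMethod, reusing the $endpoint and $key variables from the text moderation sketch above, might look like this:

$body = '{"DataRepresentation":"URL","Value":"https://moderatorsampleimages.blob.core.windows.net/samples/sample.jpg"}'
Invoke-RestMethod -Uri "$endpoint/contentmoderator/moderate/v1.0/ProcessImage/Evaluate?CacheImage=true" -Method Post -ContentType "application/json" -Headers @{ "Ocp-Apim-Subscription-Key" = $key } -Body $body | ConvertTo-Json -Depth 10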

Examine the results

You should see the image moderation results displayed as JSON data in the console window.

{
  "AdultClassificationScore": x.xxx,
  "IsImageAdultClassified": <Bool>,
  "RacyClassificationScore": x.xxx,
  "IsImageRacyClassified": <Bool>,
  "AdvancedInfo": [],
  "Result": false,
  "Status": {
    "Code": 3000,
    "Description": "OK",
    "Exception": null
  },
  "TrackingId": "<Request Tracking Id>"
}

For more information on the image attributes that Content Moderator screens for, see the Image moderation concepts guide.

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to use the Content Moderator REST API to do moderation tasks. Next, learn more about the moderation of images or other media by reading a conceptual guide.