What is Azure Content Moderator?
The Content Moderator Review tool is now deprecated and will be retired on December 31, 2021.
Azure Content Moderator is an AI service that lets you handle content that is potentially offensive, risky, or otherwise undesirable. It includes the AI-powered content moderation service, which scans text, images, and videos and applies content flags automatically, as well as the Review tool, an online moderator environment for a team of human reviewers.
You may want to build content filtering software into your app to comply with regulations or maintain the intended environment for your users.
This documentation contains the following article types:
- Quickstarts are getting-started instructions to guide you through making requests to the service.
- How-to guides contain instructions for using the service in more specific or customized ways.
- Concepts provide in-depth explanations of the service functionality and features.
- Tutorials are longer guides that show you how to use the service as a component in broader business solutions.
Where it's used
The following are a few scenarios in which a software developer or team would require a content moderation service:
- Online marketplaces that moderate product catalogs and other user-generated content.
- Gaming companies that moderate user-generated game artifacts and chat rooms.
- Social messaging platforms that moderate images, text, and videos added by their users.
- Enterprise media companies that implement centralized moderation for their content.
- K-12 education solution providers that filter out content inappropriate for students and educators.
You cannot use Content Moderator to detect illegal child exploitation images. However, qualified organizations can use the PhotoDNA Cloud Service to screen for this type of content.
What it includes
The Content Moderator service consists of several web service APIs available through both REST calls and a .NET SDK. It also includes the Review tool, which allows human reviewers to aid the service and improve or fine-tune its moderation function.
The Content Moderator service includes Moderation APIs, which check content for material that is potentially inappropriate or objectionable.
The following table describes the different types of moderation APIs.
| API group | Description |
|--|--|
| Text moderation | Scans text for offensive content, sexually explicit or suggestive content, profanity, and personal data. |
| Custom term lists | Scans text against a custom list of terms in addition to the built-in terms. Use custom lists to block or allow content according to your own content policies. |
| Image moderation | Scans images for adult or racy content, detects text in images with the Optical Character Recognition (OCR) capability, and detects faces. |
| Custom image lists | Scans images against a custom list of images. Use custom image lists to filter out instances of commonly recurring content that you don't want to classify again. |
| Video moderation | Scans videos for adult or racy content and returns time markers for that content. |
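As a concrete illustration, the following is a minimal sketch of calling the text and image Moderation APIs over REST with Python. The `ProcessText/Screen` and `ProcessImage/Evaluate` paths are the service's v1.0 REST routes; the region, subscription key, sample text, and image URL are placeholders you'd replace with your own values.

```python
import requests

# Placeholder values -- replace with your own resource's region and key.
REGION = "westus"
SUBSCRIPTION_KEY = "your-content-moderator-key"
BASE = f"https://{REGION}.api.cognitive.microsoft.com/contentmoderator"

headers = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}

# Text moderation: screen a text snippet for profanity, machine-assisted
# classification, and personal data (PII). Add a listId query parameter
# to also screen against one of your custom term lists.
text_response = requests.post(
    f"{BASE}/moderate/v1.0/ProcessText/Screen",
    params={"classify": "True", "PII": "True", "language": "eng"},
    headers={**headers, "Content-Type": "text/plain"},
    data="Is this a crass comment? Contact me at someone@example.com.",
)
print(text_response.json())  # matched terms, PII hits, classification scores

# Image moderation: evaluate an image by URL for adult or racy content.
image_response = requests.post(
    f"{BASE}/moderate/v1.0/ProcessImage/Evaluate",
    headers={**headers, "Content-Type": "application/json"},
    json={"DataRepresentation": "URL", "Value": "https://example.com/sample.jpg"},
)
print(image_response.json())  # adult/racy classification scores and flags
```

The responses carry both raw scores and boolean flags, so your application can apply its own thresholds or pass borderline items on to human review.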
The Review APIs let you integrate your moderation pipeline with human reviewers. Use the Jobs, Reviews, and Workflow operations to create and automate human-in-the-loop workflows with the Review tool, described below.
The Workflow API is not yet available in the .NET SDK, but you can use it through the REST endpoint.
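As a sketch of how the Review APIs fit into a pipeline, the following example posts an image review to the Create Reviews operation. It assumes you already have a review team set up in the Review tool; the team name, content URL, callback endpoint, and metadata shown here are illustrative placeholders.

```python
import requests

# Placeholder values -- replace with your own region, key, and review team.
REGION = "westus"
SUBSCRIPTION_KEY = "your-content-moderator-key"
TEAM_NAME = "your-review-team"

# Create a review: the item appears in the Review tool queue for your
# human moderators to process.
response = requests.post(
    f"https://{REGION}.api.cognitive.microsoft.com/contentmoderator"
    f"/review/v1.0/teams/{TEAM_NAME}/reviews",
    headers={
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/json",
    },
    json=[{
        "Type": "Image",
        "Content": "https://example.com/sample.jpg",
        "ContentId": "sample-001",
        "CallbackEndpoint": "https://example.com/moderation-callback",
        "Metadata": [{"Key": "sc", "Value": "true"}],  # illustrative flag
    }],
)
print(response.json())  # list of created review IDs
```

When a reviewer completes the review, the service posts the decision to the callback endpoint, which is how the human judgment flows back into your application.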
The Content Moderator service also includes the web-based Review tool, which hosts the content reviews for human moderators to process. The human input doesn't train the service, but the combined work of the service and human review teams allows developers to strike the right balance between efficiency and accuracy. The Review tool also provides a user-friendly front end for several Content Moderator resources.
Data privacy and security
As with all of the Cognitive Services, developers using the Content Moderator service should be aware of Microsoft's policies on customer data. See the Cognitive Services page on the Microsoft Trust Center to learn more.