Analyze live video streams with multiple AI models using AI composition

Note

Azure Video Analyzer has been retired and is no longer available.

Azure Video Analyzer for Media is not affected by this retirement. It has been rebranded as Azure Video Indexer.

Certain customer scenarios require that video be analyzed with multiple AI models. Such models can augment each other, work independently in parallel on the same video stream, or be combined in both augmented and independently parallel arrangements on the same stream to derive actionable insights.

Azure Video Analyzer supports such scenarios via a feature called AI Composition. This guide shows you how to apply multiple models in an augmented fashion on the same video stream. It uses a Tiny (light) YOLO model and a regular YOLO model to detect an object of interest. The Tiny YOLO model is computationally lighter but less accurate than the regular YOLO model, so it is invoked first. If the detected object passes a specific confidence threshold, the sequentially staged regular YOLO model is not invoked, which uses the underlying resources efficiently.

After completing the steps in this guide, you'll be able to run a simulated live video stream through a pipeline with AI composability and extend it to your specific scenarios. The following diagram graphically represents that pipeline.

Diagram: AI composition overview

Prerequisites

  • An Azure account that has an active subscription. Create an account for free if you don't already have one.

    Note

    You need an Azure subscription with permissions to create service principals (the Owner role provides this). If you don't have the right permissions, ask your account administrator to grant them to you.

  • Visual Studio Code on your development machine. Make sure you have the Azure IoT Tools extension.

  • Make sure the network that your development machine is connected to permits outbound Advanced Message Queuing Protocol (AMQP) traffic over port 5671. This setup enables Azure IoT Tools to communicate with Azure IoT Hub.

  • Complete Quickstart: Analyze a live video feed from a (simulated) IP camera using your own gRPC model. Don't skip this step, as it's a strict requirement for this how-to guide.

Tip

You might be prompted to install Docker while you're installing the Azure IoT Tools extension. Feel free to ignore the prompt.

If you run into issues with Azure resources that get created, please view our troubleshooting guide to resolve some commonly encountered issues.

Review the video sample

Because you already completed the quickstart listed in the prerequisites section, you have an edge device with the input folder /home/localedgeuser/samples/input, which includes certain video files. Log into the IoT Edge device, change to the directory /home/localedgeuser/samples/input/, and run the following command to get the input file used in this how-to guide.

wget https://avamedia.blob.core.windows.net/public/co-final.mkv
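Putting the whole sequence together, a minimal sketch might look like the following. It assumes the default localedgeuser account from the quickstart; the device address is a placeholder you replace with your own.

# Log into the IoT Edge device (replace the address with your device's IP or hostname).
ssh localedgeuser@<edge-device-ip>

# On the device: change to the input folder and download the sample video.
cd /home/localedgeuser/samples/input/
wget https://avamedia.blob.core.windows.net/public/co-final.mkv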

Additionally, if you like, on a machine that has VLC media player installed, select Ctrl+N and paste the link to the sample video (.mkv) to start playback. You'll see footage of cars on a freeway.
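If you prefer the command line and have the vlc binary on your PATH, the same playback can be started directly:

vlc https://avamedia.blob.core.windows.net/public/co-final.mkv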

Create and deploy the pipeline

The steps here are similar to the ones in the quickstart that you completed as a prerequisite, with a few minor adjustments.

  1. Follow the guidelines in the Create and deploy the pipeline section of the quickstart you just finished, and be sure to make the following adjustments as you continue with the steps. These adjustments ensure that the correct body is used for each direct method call.

    Edit the operations.json file:

    • Change the link to the pipeline topology: "pipelineTopologyUrl" : "https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/ai-composition/topology.json"

    • Under livePipelineSet:

      1. Ensure "topologyName" : "AIComposition".
      2. Change the rtspUrl parameter value to "rtsp://rtspsim:554/media/co-final.mkv".
    • Under pipelineTopologyDelete, edit the name: "name" : "AIComposition"
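
    After these edits, the affected entries in operations.json might look roughly like the trimmed sketch below. This is a sketch only: the file in the sample repository contains additional operations (such as activating and deactivating the live pipeline), and the live pipeline name and apiVersion shown here are assumptions carried over from the quickstart sample.

    {
      "apiVersion": "1.1",
      "operations": [
        {
          "opName": "pipelineTopologySet",
          "opParams": {
            "pipelineTopologyUrl": "https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/ai-composition/topology.json"
          }
        },
        {
          "opName": "livePipelineSet",
          "opParams": {
            "name": "Sample-Pipeline-1",
            "properties": {
              "topologyName": "AIComposition",
              "parameters": [
                {
                  "name": "rtspUrl",
                  "value": "rtsp://rtspsim:554/media/co-final.mkv"
                }
              ]
            }
          }
        },
        {
          "opName": "pipelineTopologyDelete",
          "opParams": {
            "name": "AIComposition"
          }
        }
      ]
    }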

  2. Follow the guidelines in the Generate and deploy the IoT Edge deployment manifest section, but use the following deployment manifest instead: src/edge/deployment.composite.template.json
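
    If you'd rather deploy from the command line than from Visual Studio Code, the manifest generated from that template can also be applied with the Azure CLI IoT extension. In this sketch, the hub name, device ID, and the path of the generated manifest are placeholders; adjust them to match your setup.

    # Apply the generated deployment manifest to the edge device.
    az iot edge set-modules \
      --hub-name <iot-hub-name> \
      --device-id <edge-device-id> \
      --content src/edge/config/deployment.composite.amd64.json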

  3. Follow the guidelines in Run the sample program section.
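
    If you want to verify from the command line that the topology was set, one option is to invoke the module's pipelineTopologyList direct method with the Azure CLI. This sketch assumes the avaedge module name from the quickstart; the hub and device names are placeholders.

    # List the pipeline topologies currently set on the avaedge module.
    az iot hub invoke-module-method \
      --hub-name <iot-hub-name> \
      --device-id <edge-device-id> \
      --module-id avaedge \
      --method-name pipelineTopologyList \
      --method-payload '{"@apiVersion": "1.1"}'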

  4. For result details, see the Interpret results section of the quickstart. In addition to the analytics events on the hub and the diagnostic events, the topology that you used also creates a relevant video clip in the cloud, triggered by the AI signal-based activation of the signal gate. The clip is accompanied by operational events on the hub for downstream workflows to act on. You can examine and play the video clip by signing in to the Azure portal.
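
To watch the analytics and operational events described above as they arrive, one option is the Azure CLI event monitor, shown here as a sketch with a placeholder hub name:

# Stream events from the IoT hub's built-in event endpoint.
az iot hub monitor-events --hub-name <iot-hub-name>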

Clean up

If you're not going to continue to use this application, delete the resources you created in this guide.
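
If everything was created in a single resource group, one way to remove it all is with the Azure CLI; the resource group name below is a placeholder.

# Delete the resource group and all the resources in it.
az group delete --name <resource-group-name> --yes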

Next steps

Learn more about diagnostic messages.