videoanalyzeredge Package
Classes
CertificateSource |
Base class for certificate sources. You probably want to use the sub-classes and not this class directly. Known sub-classes are: PemCertificateList. All required parameters must be populated in order to send to Azure. |
CognitiveServicesVisionProcessor |
A processor that allows the pipeline topology to send video frames to a Cognitive Services Vision extension. Inference results are relayed to downstream nodes. All required parameters must be populated in order to send to Azure. |
CredentialsBase |
Base class for credential objects. You probably want to use the sub-classes and not this class directly. Known sub-classes are: HttpHeaderCredentials, SymmetricKeyCredentials, UsernamePasswordCredentials. All required parameters must be populated in order to send to Azure. |
DiscoveredOnvifDevice |
The discovered properties of the ONVIF device that are returned during the discovery. |
DiscoveredOnvifDeviceCollection |
A list of ONVIF devices that were discovered in the same subnet as the IoT Edge device. |
EndpointBase |
Base class for endpoints. You probably want to use the sub-classes and not this class directly. Known sub-classes are: TlsEndpoint, UnsecuredEndpoint. All required parameters must be populated in order to send to Azure. |
ExtensionProcessorBase |
Base class for pipeline extension processors. Pipeline extensions allow for custom media analysis and processing to be plugged into the Video Analyzer pipeline. You probably want to use the sub-classes and not this class directly. Known sub-classes are: GrpcExtension, HttpExtension. All required parameters must be populated in order to send to Azure. |
FileSink |
File sink allows for video and audio content to be recorded on the file system on the edge device. All required parameters must be populated in order to send to Azure. |
GrpcExtension |
The gRPC extension processor allows pipeline extension plugins to be connected to the pipeline over a gRPC channel. Extension plugins must act as a gRPC server. Please see https://aka.ms/ava-extension-grpc for details. All required parameters must be populated in order to send to Azure. |
GrpcExtensionDataTransfer |
Describes how media is transferred to the extension plugin. All required parameters must be populated in order to send to Azure. |
H264Configuration |
Class representing the H264 Configuration. |
HttpExtension |
The HTTP extension processor allows pipeline extension plugins to be connected to the pipeline over the HTTP protocol. Extension plugins must act as an HTTP server. Please see https://aka.ms/ava-extension-http for details. All required parameters must be populated in order to send to Azure. |
HttpHeaderCredentials |
HTTP header credentials. All required parameters must be populated in order to send to Azure. |
ImageFormatBmp |
BMP image encoding. All required parameters must be populated in order to send to Azure. |
ImageFormatJpeg |
JPEG image encoding. All required parameters must be populated in order to send to Azure. |
ImageFormatPng |
PNG image encoding. All required parameters must be populated in order to send to Azure. |
ImageFormatProperties |
Base class for image formatting properties. You probably want to use the sub-classes and not this class directly. Known sub-classes are: ImageFormatBmp, ImageFormatJpeg, ImageFormatPng, ImageFormatRaw. All required parameters must be populated in order to send to Azure. |
ImageFormatRaw |
Raw image formatting. All required parameters must be populated in order to send to Azure. |
ImageProperties |
Image transformations and formatting options to be applied to the video frame(s). |
ImageScale |
Image scaling mode. |
IotHubDeviceConnection |
Information that enables communication between the IoT Hub and the IoT device - allowing this edge module to act as a transparent gateway between the two. All required parameters must be populated in order to send to Azure. |
IotHubMessageSink |
IoT Hub Message sink allows for pipeline messages to be published to the IoT Edge Hub. Published messages can then be delivered to the cloud and other modules via routes declared in the IoT Edge deployment manifest. All required parameters must be populated in order to send to Azure. |
IotHubMessageSource |
IoT Hub Message source allows for the pipeline to consume messages from the IoT Edge Hub. Messages can be routed from other IoT modules via routes declared in the IoT Edge deployment manifest. All required parameters must be populated in order to send to Azure. |
LineCrossingProcessor |
Line crossing processor allows for the detection of tracked objects moving across one or more predefined lines. It must be placed downstream of an object tracker, or downstream of an AI extension node that generates a sequenceId for objects which are tracked across different frames of the video. Inference events are generated every time an object crosses from one side of the line to the other. All required parameters must be populated in order to send to Azure. |
LivePipeline |
Live Pipeline represents a unique instance of a pipeline topology which is used for real-time content ingestion and analysis. All required parameters must be populated in order to send to Azure. |
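As a sketch of what a live pipeline body contains, the JSON below binds a previously-set topology to concrete parameter values. The pipeline name, topology name, and parameter names are illustrative and must match a topology you have already set on the module:

```python
import json

# Hypothetical live-pipeline body. The names here are illustrative and
# must correspond to a pipeline topology already set on the edge module.
live_pipeline = {
    "name": "pipeline1",
    "properties": {
        "topologyName": "RecordToVideoSink",
        "description": "Sample live pipeline for one camera",
        "parameters": [
            {"name": "rtspUrl", "value": "rtsp://camera.example/stream"},
            {"name": "rtspPassword", "value": "example-password"},
        ],
    },
}

print(json.dumps(live_pipeline, indent=2))
```

The SDK's `LivePipeline` and `LivePipelineSetRequest` classes produce an equivalent body, so a hand-written dict like this is mainly useful for inspecting what goes over the wire.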
LivePipelineActivateRequest |
Activates an existing live pipeline. Variables are only populated by the server, and will be ignored when sending a request. All required parameters must be populated in order to send to Azure. |
LivePipelineCollection |
A collection of live pipelines. |
LivePipelineDeactivateRequest |
Deactivates an existing live pipeline. Variables are only populated by the server, and will be ignored when sending a request. All required parameters must be populated in order to send to Azure. |
LivePipelineDeleteRequest |
Deletes an existing live pipeline. Variables are only populated by the server, and will be ignored when sending a request. All required parameters must be populated in order to send to Azure. |
LivePipelineGetRequest |
Retrieves an existing live pipeline. Variables are only populated by the server, and will be ignored when sending a request. All required parameters must be populated in order to send to Azure. |
LivePipelineListRequest |
List all existing live pipelines. Variables are only populated by the server, and will be ignored when sending a request. All required parameters must be populated in order to send to Azure. |
LivePipelineProperties |
Live pipeline properties. |
LivePipelineSetRequest |
Creates a new live pipeline or updates an existing one. Variables are only populated by the server, and will be ignored when sending a request. All required parameters must be populated in order to send to Azure. |
LivePipelineSetRequestBody |
Live Pipeline represents a unique instance of a pipeline topology which is used for real-time content ingestion and analysis. Variables are only populated by the server, and will be ignored when sending a request. All required parameters must be populated in order to send to Azure. |
MPEG4Configuration |
Class representing the MPEG4 Configuration. |
MediaProfile |
Class representing the ONVIF MediaProfiles. |
MediaUri |
Object representing the URI that will be used to request for media streaming. |
MethodRequest |
Base class for direct method calls. You probably want to use the sub-classes and not this class directly. Known sub-classes are: LivePipelineSetRequestBody, MethodRequestEmptyBodyBase, PipelineTopologySetRequestBody, RemoteDeviceAdapterSetRequestBody, LivePipelineListRequest, LivePipelineSetRequest, OnvifDeviceDiscoverRequest, OnvifDeviceGetRequest, PipelineTopologyListRequest, PipelineTopologySetRequest, RemoteDeviceAdapterListRequest, RemoteDeviceAdapterSetRequest. Variables are only populated by the server, and will be ignored when sending a request. All required parameters must be populated in order to send to Azure. |
MethodRequestEmptyBodyBase |
MethodRequestEmptyBodyBase. You probably want to use the sub-classes and not this class directly. Known sub-classes are: LivePipelineActivateRequest, LivePipelineDeactivateRequest, LivePipelineDeleteRequest, LivePipelineGetRequest, PipelineTopologyDeleteRequest, PipelineTopologyGetRequest, RemoteDeviceAdapterDeleteRequest, RemoteDeviceAdapterGetRequest. Variables are only populated by the server, and will be ignored when sending a request. All required parameters must be populated in order to send to Azure. |
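Each of these request classes corresponds to a direct method on the Video Analyzer edge module: the wire format is a method name plus a JSON body carrying an `@apiVersion` field. A minimal sketch of that envelope, assuming api version "1.1" (the version depends on the deployed module release):

```python
import json

def build_method_request(method_name, body, api_version="1.1"):
    """Build the (direct-method name, JSON payload) pair that the SDK's
    MethodRequest subclasses serialize to. The api version is an
    assumption here and depends on the module release."""
    payload = {"@apiVersion": api_version}
    payload.update(body)
    return method_name, json.dumps(payload)

# An empty-body request carries only the resource name, e.g. activating
# an existing live pipeline:
method, payload = build_method_request("livePipelineActivate", {"name": "pipeline1"})
```

The resulting pair can then be sent to the module with any IoT Hub client that supports invoking direct methods.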
MotionDetectionProcessor |
Motion detection processor allows for motion detection on the video stream. It generates motion events whenever motion is present on the video. All required parameters must be populated in order to send to Azure. |
NamedLineBase |
Base class for named lines. You probably want to use the sub-classes and not this class directly. Known sub-classes are: NamedLineString. All required parameters must be populated in order to send to Azure. |
NamedLineString |
Describes a line configuration. All required parameters must be populated in order to send to Azure. |
NamedPolygonBase |
Describes the named polygon. You probably want to use the sub-classes and not this class directly. Known sub-classes are: NamedPolygonString. All required parameters must be populated in order to send to Azure. |
NamedPolygonString |
Describes a closed polygon configuration. All required parameters must be populated in order to send to Azure. |
NodeInput |
Describes an input signal to be used on a pipeline node. All required parameters must be populated in order to send to Azure. |
ObjectTrackingProcessor |
Object tracker processor allows for continuous tracking of one or more objects over a finite sequence of video frames. It must be used downstream of an object detector extension node, allowing the extension to be configured to perform inferences on sparse frames through the use of the 'maximumSamplesPerSecond' sampling property. The object tracker node then tracks the detected objects over the frames in which the detector is not invoked, resulting in smoother tracking of detected objects across the continuum of video frames. The tracker stops tracking objects which are not subsequently detected by the upstream detector. All required parameters must be populated in order to send to Azure. |
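The detector-plus-tracker pairing described above can be sketched as two topology nodes: a sparsely-sampled extension feeding the tracker, which fills in the frames between detections. The extension endpoint URL and node names below are hypothetical:

```python
# Illustrative node pair. The detector endpoint URL and node names are
# hypothetical; the tracker's input references the detector by name.
http_extension = {
    "@type": "#Microsoft.VideoAnalyzer.HttpExtension",
    "name": "objectDetector",
    "inputs": [{"nodeName": "rtspSource"}],
    "endpoint": {
        "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
        "url": "http://detector:8080/score",
    },
    "image": {
        "scale": {"mode": "preserveAspectRatio", "width": "416", "height": "416"}
    },
    # The detector only sees sparse frames; the tracker covers the rest.
    "samplingOptions": {"maximumSamplesPerSecond": "2"},
}

object_tracker = {
    "@type": "#Microsoft.VideoAnalyzer.ObjectTrackingProcessor",
    "name": "objectTracker",
    "inputs": [{"nodeName": "objectDetector"}],
    "accuracy": "medium",  # low/medium/high; higher costs more CPU on average
}
```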
OnvifDevice |
The ONVIF device properties. |
OnvifDeviceDiscoverRequest |
Lists all the discoverable ONVIF devices on the same subnet as the Edge Module. Variables are only populated by the server, and will be ignored when sending a request. All required parameters must be populated in order to send to Azure. |
OnvifDeviceGetRequest |
Retrieves properties and media profiles of an ONVIF device. Variables are only populated by the server, and will be ignored when sending a request. All required parameters must be populated in order to send to Azure. |
OnvifDns |
The ONVIF device DNS properties. |
OnvifHostName |
The ONVIF device hostname properties. |
OnvifSystemDateTime |
The ONVIF device system date and time properties. |
OutputSelector |
Allows for the selection of particular streams from another node. |
ParameterDeclaration |
Single topology parameter declaration. Declared parameters must be referenced throughout the topology and can optionally have default values to be used when they are not defined in the pipeline instances. All required parameters must be populated in order to send to Azure. |
ParameterDefinition |
Defines the parameter value of a specific pipeline topology parameter. See pipeline topology parameters for more information. All required parameters must be populated in order to send to Azure. |
PemCertificateList |
A list of PEM formatted certificates. All required parameters must be populated in order to send to Azure. |
PipelineTopology |
Pipeline topology describes the processing steps to be applied when processing media for a particular outcome. The topology should be defined according to the scenario to be achieved and can be reused across many pipeline instances which share the same processing characteristics. For instance, a pipeline topology which acquires data from an RTSP camera, processes it with a specific AI model, and stores the data in the cloud can be reused across many different cameras, as long as the same processing is to be applied across all of them. Individual instance properties can be defined through the use of user-defined parameters, which allow a topology to be parameterized so that individual pipelines can refer to different values, such as individual cameras' RTSP endpoints and credentials. Overall, a topology is composed of parameters, source nodes, processor nodes, and sink nodes. |
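For instance, a minimal record-to-cloud topology can be written out as the JSON body the SDK's `PipelineTopology` and `PipelineTopologyProperties` classes serialize to. The topology name, parameter names, and defaults below are illustrative; note how `${...}` placeholders reference the declared parameters:

```python
# A minimal, hand-written topology body: one RTSP source feeding one
# video sink. Names and parameter defaults are illustrative.
topology = {
    "name": "RecordToVideoSink",
    "properties": {
        "description": "Record an RTSP feed to a Video Analyzer video resource",
        "parameters": [
            {"name": "rtspUrl", "type": "String"},
            {"name": "rtspUserName", "type": "String", "default": "dummy"},
            {"name": "rtspPassword", "type": "SecretString", "default": "dummy"},
        ],
        "sources": [
            {
                "@type": "#Microsoft.VideoAnalyzer.RtspSource",
                "name": "rtspSource",
                "endpoint": {
                    "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
                    "url": "${rtspUrl}",
                    "credentials": {
                        "@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
                        "username": "${rtspUserName}",
                        "password": "${rtspPassword}",
                    },
                },
            }
        ],
        "sinks": [
            {
                "@type": "#Microsoft.VideoAnalyzer.VideoSink",
                "name": "videoSink",
                "inputs": [{"nodeName": "rtspSource"}],
                "videoName": "sample-video",
                "localMediaCachePath": "/var/lib/videoanalyzer/tmp/",
                "localMediaCacheMaximumSizeMiB": "1024",
            }
        ],
    },
}
```

Each live pipeline created against this topology then supplies its own values for `rtspUrl`, `rtspUserName`, and `rtspPassword`.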
PipelineTopologyCollection |
A collection of pipeline topologies. |
PipelineTopologyDeleteRequest |
Deletes an existing pipeline topology. Variables are only populated by the server, and will be ignored when sending a request. All required parameters must be populated in order to send to Azure. |
PipelineTopologyGetRequest |
Retrieves an existing pipeline topology. Variables are only populated by the server, and will be ignored when sending a request. All required parameters must be populated in order to send to Azure. |
PipelineTopologyListRequest |
List all existing pipeline topologies. Variables are only populated by the server, and will be ignored when sending a request. All required parameters must be populated in order to send to Azure. |
PipelineTopologyProperties |
Pipeline topology properties. |
PipelineTopologySetRequest |
Creates a new pipeline topology or updates an existing one. Variables are only populated by the server, and will be ignored when sending a request. All required parameters must be populated in order to send to Azure. |
PipelineTopologySetRequestBody |
Pipeline topology describes the processing steps to be applied when processing media for a particular outcome. The topology should be defined according to the scenario to be achieved and can be reused across many pipeline instances which share the same processing characteristics. For instance, a pipeline topology which acquires data from an RTSP camera, processes it with a specific AI model, and stores the data in the cloud can be reused across many different cameras, as long as the same processing is to be applied across all of them. Individual instance properties can be defined through the use of user-defined parameters, which allow a topology to be parameterized so that individual pipelines can refer to different values, such as individual cameras' RTSP endpoints and credentials. Overall, a topology is composed of parameters, source nodes, processor nodes, and sink nodes. |
ProcessorNodeBase |
Base class for topology processor nodes. You probably want to use the sub-classes and not this class directly. Known sub-classes are: CognitiveServicesVisionProcessor, ExtensionProcessorBase, LineCrossingProcessor, MotionDetectionProcessor, ObjectTrackingProcessor, SignalGateProcessor. All required parameters must be populated in order to send to Azure. |
RateControl |
Class representing the video's rate control. |
RemoteDeviceAdapter |
The Video Analyzer edge module can act as a transparent gateway for video, enabling IoT devices to send video to the cloud from behind a firewall. A remote device adapter should be created for each such IoT device. Communication between the cloud and IoT device would then flow via the Video Analyzer edge module. All required parameters must be populated in order to send to Azure. |
RemoteDeviceAdapterCollection |
A list of remote device adapters. |
RemoteDeviceAdapterDeleteRequest |
Deletes an existing remote device adapter. Variables are only populated by the server, and will be ignored when sending a request. All required parameters must be populated in order to send to Azure. |
RemoteDeviceAdapterGetRequest |
Retrieves an existing remote device adapter. Variables are only populated by the server, and will be ignored when sending a request. All required parameters must be populated in order to send to Azure. |
RemoteDeviceAdapterListRequest |
List all existing remote device adapters. Variables are only populated by the server, and will be ignored when sending a request. All required parameters must be populated in order to send to Azure. |
RemoteDeviceAdapterProperties |
Remote device adapter properties. All required parameters must be populated in order to send to Azure. |
RemoteDeviceAdapterSetRequest |
Creates a new remote device adapter or updates an existing one. Variables are only populated by the server, and will be ignored when sending a request. All required parameters must be populated in order to send to Azure. |
RemoteDeviceAdapterSetRequestBody |
The Video Analyzer edge module can act as a transparent gateway for video, enabling IoT devices to send video to the cloud from behind a firewall. A remote device adapter should be created for each such IoT device. Communication between the cloud and IoT device would then flow via the Video Analyzer edge module. Variables are only populated by the server, and will be ignored when sending a request. All required parameters must be populated in order to send to Azure. |
RemoteDeviceAdapterTarget |
Properties of the remote device adapter target. All required parameters must be populated in order to send to Azure. |
RtspSource |
RTSP source allows for media from an RTSP camera or generic RTSP server to be ingested into a live pipeline. All required parameters must be populated in order to send to Azure. |
SamplingOptions |
Defines how often media is submitted to the extension plugin. |
SignalGateProcessor |
A signal gate determines when to block (gate) incoming media, and when to allow it through. It gathers input events over the activationEvaluationWindow, and determines whether to open or close the gate. See https://aka.ms/ava-signalgate for more information. All required parameters must be populated in order to send to Azure. |
SinkNodeBase |
Base class for topology sink nodes. You probably want to use the sub-classes and not this class directly. Known sub-classes are: FileSink, IotHubMessageSink, VideoSink. All required parameters must be populated in order to send to Azure. |
SourceNodeBase |
Base class for topology source nodes. You probably want to use the sub-classes and not this class directly. Known sub-classes are: IotHubMessageSource, RtspSource. All required parameters must be populated in order to send to Azure. |
SpatialAnalysisCustomOperation |
Defines a Spatial Analysis custom operation. This requires the Azure Cognitive Services Spatial Analysis module to be deployed alongside the Video Analyzer module; please see https://aka.ms/ava-spatial-analysis for more information. All required parameters must be populated in order to send to Azure. |
SpatialAnalysisOperationBase |
Base class for Azure Cognitive Services Spatial Analysis operations. You probably want to use the sub-classes and not this class directly. Known sub-classes are: SpatialAnalysisCustomOperation, SpatialAnalysisTypedOperationBase. All required parameters must be populated in order to send to Azure. |
SpatialAnalysisOperationEventBase |
Defines the Azure Cognitive Services Spatial Analysis operation eventing configuration. |
SpatialAnalysisPersonCountEvent |
Defines a Spatial Analysis person count operation eventing configuration. |
SpatialAnalysisPersonCountOperation |
Defines a Spatial Analysis person count operation. This requires the Azure Cognitive Services Spatial Analysis module to be deployed alongside the Video Analyzer module; please see https://aka.ms/ava-spatial-analysis for more information. All required parameters must be populated in order to send to Azure. |
SpatialAnalysisPersonCountZoneEvents |
SpatialAnalysisPersonCountZoneEvents. All required parameters must be populated in order to send to Azure. |
SpatialAnalysisPersonDistanceEvent |
Defines a Spatial Analysis person distance operation eventing configuration. |
SpatialAnalysisPersonDistanceOperation |
Defines a Spatial Analysis person distance operation. This requires the Azure Cognitive Services Spatial Analysis module to be deployed alongside the Video Analyzer module; please see https://aka.ms/ava-spatial-analysis for more information. All required parameters must be populated in order to send to Azure. |
SpatialAnalysisPersonDistanceZoneEvents |
SpatialAnalysisPersonDistanceZoneEvents. All required parameters must be populated in order to send to Azure. |
SpatialAnalysisPersonLineCrossingEvent |
Defines a Spatial Analysis person line crossing operation eventing configuration. |
SpatialAnalysisPersonLineCrossingLineEvents |
SpatialAnalysisPersonLineCrossingLineEvents. All required parameters must be populated in order to send to Azure. |
SpatialAnalysisPersonLineCrossingOperation |
Defines a Spatial Analysis person line crossing operation. This requires the Azure Cognitive Services Spatial Analysis module to be deployed alongside the Video Analyzer module; please see https://aka.ms/ava-spatial-analysis for more information. All required parameters must be populated in order to send to Azure. |
SpatialAnalysisPersonZoneCrossingEvent |
Defines a Spatial Analysis person crossing zone operation eventing configuration. |
SpatialAnalysisPersonZoneCrossingOperation |
Defines a Spatial Analysis person zone crossing operation. This requires the Azure Cognitive Services Spatial Analysis module to be deployed alongside the Video Analyzer module; please see https://aka.ms/ava-spatial-analysis for more information. All required parameters must be populated in order to send to Azure. |
SpatialAnalysisPersonZoneCrossingZoneEvents |
SpatialAnalysisPersonZoneCrossingZoneEvents. All required parameters must be populated in order to send to Azure. |
SpatialAnalysisTypedOperationBase |
Base class for Azure Cognitive Services Spatial Analysis typed operations. You probably want to use the sub-classes and not this class directly. Known sub-classes are: SpatialAnalysisPersonCountOperation, SpatialAnalysisPersonDistanceOperation, SpatialAnalysisPersonLineCrossingOperation, SpatialAnalysisPersonZoneCrossingOperation. All required parameters must be populated in order to send to Azure. |
SymmetricKeyCredentials |
Symmetric key credential. All required parameters must be populated in order to send to Azure. |
SystemData |
Read-only system metadata associated with a resource. |
TlsEndpoint |
TLS endpoint describes an endpoint that the pipeline can connect to over TLS transport (data is encrypted in transit). All required parameters must be populated in order to send to Azure. |
TlsValidationOptions |
Options for controlling the validation of TLS endpoints. |
UnsecuredEndpoint |
Unsecured endpoint describes an endpoint that the pipeline can connect to over clear transport (no encryption in transit). All required parameters must be populated in order to send to Azure. |
UsernamePasswordCredentials |
Username and password credentials. All required parameters must be populated in order to send to Azure. |
VideoCreationProperties |
Optional video properties to be used in case a new video resource needs to be created on the service. These will not take effect if the video already exists. |
VideoEncoderConfiguration |
Class representing the video encoder configuration. |
VideoPublishingOptions |
Options for changing video publishing behavior on the video sink and output video. |
VideoResolution |
The Video resolution. |
VideoSink |
Video sink allows for video and audio to be recorded to the Video Analyzer service. The recorded video can be played from anywhere and further managed from the cloud. Due to security reasons, a given Video Analyzer edge module instance can only record content to new video entries, or existing video entries previously recorded by the same module. Any attempt to record content to an existing video which has not been created by the same module instance will result in failure to record. All required parameters must be populated in order to send to Azure. |
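Since a module instance can only record to video entries it created itself, the sink node typically names the target video and supplies creation properties that apply only when the resource does not yet exist. A hedged sketch of such a node (the video name, titles, and cache path are illustrative):

```python
# Illustrative VideoSink node body. videoName is hypothetical, and
# videoCreationProperties only take effect when the video resource
# does not already exist on the service.
video_sink = {
    "@type": "#Microsoft.VideoAnalyzer.VideoSink",
    "name": "videoSink",
    "inputs": [{"nodeName": "rtspSource"}],
    "videoName": "camera-001",
    "videoCreationProperties": {
        "title": "Parking lot camera",
        "description": "Overnight recording",
        "segmentLength": "PT30S",  # ISO 8601 duration
    },
    "localMediaCachePath": "/var/lib/videoanalyzer/tmp/",
    "localMediaCacheMaximumSizeMiB": "1024",
}
```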
Enums
GrpcExtensionDataTransferMode |
Data transfer mode: embedded or sharedMemory. |
H264Profile |
The H264 profile. |
ImageFormatRawPixelFormat |
Pixel format to be applied to the raw image. |
ImageScaleMode |
Describes the image scaling mode to be applied. Default mode is 'pad'. |
LivePipelineState |
Current pipeline state (read-only). |
MPEG4Profile |
The MPEG4 profile. |
MotionDetectionSensitivity |
Motion detection sensitivity: low, medium, high. |
ObjectTrackingAccuracy |
Object tracker accuracy: low, medium, high. Higher accuracy leads to higher CPU consumption on average. |
OnvifSystemDateTimeType |
An enum value determining whether the date and time were configured using NTP or manually. |
OutputSelectorOperator |
The operator to compare properties by. |
OutputSelectorProperty |
The property of the data stream to be used as the selection criteria. |
ParameterType |
Type of the parameter. |
RtspTransport |
Network transport utilized by the RTSP and RTP exchange: TCP or HTTP. When using TCP, the RTP packets are interleaved on the TCP RTSP connection. When using HTTP, the RTSP messages are exchanged through long-lived HTTP connections, and the RTP packets are interleaved in the HTTP connections alongside the RTSP messages. |
SpatialAnalysisOperationFocus |
The operation focus type. |
SpatialAnalysisPersonCountEventTrigger |
The event trigger type. |
SpatialAnalysisPersonDistanceEventTrigger |
The event trigger type. |
SpatialAnalysisPersonZoneCrossingEventType |
The event type. |
VideoEncoding |
The video codec used by the Media Profile. |