MotionDetectionProcessor Class
The motion detection processor performs motion detection on the video stream. It generates motion events whenever motion is detected in the video.
All required parameters must be populated in order to send to Azure.
- Inheritance
- azure.media.videoanalyzeredge._generated.models._models_py3.ProcessorNodeBase
- MotionDetectionProcessor
Constructor
MotionDetectionProcessor(*, name: str, inputs: List[azure.media.videoanalyzeredge._generated.models._models_py3.NodeInput], sensitivity: Optional[Union[str, azure.media.videoanalyzeredge._generated.models._azure_video_analyzerfor_edge_enums.MotionDetectionSensitivity]] = None, output_motion_region: Optional[bool] = None, event_aggregation_window: Optional[str] = None, **kwargs)
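A minimal construction sketch, assuming the model classes are re-exported from the azure.media.videoanalyzeredge package as in the SDK samples; the upstream node name "rtspSource" is an illustrative assumption about the surrounding topology, not part of this class:

```python
from azure.media.videoanalyzeredge import MotionDetectionProcessor, NodeInput

# "rtspSource" is a hypothetical upstream node name; sensitivity accepts the
# string values "low", "medium", or "high" (or the MotionDetectionSensitivity enum).
motion_processor = MotionDetectionProcessor(
    name="motionDetection",
    inputs=[NodeInput(node_name="rtspSource")],
    sensitivity="medium",
    output_motion_region=True,
    event_aggregation_window="PT1S",  # ISO8601 duration: aggregate events over 1 second
)
```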
Variables
- type
- str
Required. Type discriminator for the derived types. Constant filled by server.
- name
- str
Required. Node name. Must be unique within the topology.
- inputs
- list[azure.media.videoanalyzer.edge.models.NodeInput]
Required. An array of upstream node references within the topology to be used as inputs for this node.
- sensitivity
- str or azure.media.videoanalyzer.edge.models.MotionDetectionSensitivity
Motion detection sensitivity: low, medium, high. Possible values include: "low", "medium", "high".
- output_motion_region
- bool
Indicates whether the processor should detect and output the regions within the video frame where motion was detected. Default is true.
- event_aggregation_window
- str
Duration of the time window over which events are aggregated before being emitted. Value must be specified in ISO8601 duration format (e.g. "PT2S" equals 2 seconds). Use 0 seconds for no aggregation. Default is 1 second.
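A hedged sketch of how this node typically sits between a source and a sink in a pipeline topology; the RTSP source, IoT Hub sink, node names, and URL below are illustrative assumptions, not prescribed by this class:

```python
from azure.media.videoanalyzeredge import (
    MotionDetectionProcessor,
    NodeInput,
    RtspSource,
    UnsecuredEndpoint,
    IotHubMessageSink,
    PipelineTopology,
    PipelineTopologyProperties,
)

# Illustrative source node; the RTSP URL is a placeholder.
source = RtspSource(
    name="rtspSource",
    endpoint=UnsecuredEndpoint(url="rtsp://example.com/stream"),
)

# Motion detection fed by the source; events are aggregated over 2 seconds.
motion = MotionDetectionProcessor(
    name="motionDetection",
    inputs=[NodeInput(node_name="rtspSource")],
    sensitivity="medium",
    event_aggregation_window="PT2S",
)

# Illustrative sink that forwards motion events to an IoT Hub output.
sink = IotHubMessageSink(
    name="hubSink",
    inputs=[NodeInput(node_name="motionDetection")],
    hub_output_name="inferenceOutput",
)

topology = PipelineTopology(
    name="MotionDetectionTopology",
    properties=PipelineTopologyProperties(
        sources=[source],
        processors=[motion],
        sinks=[sink],
    ),
)
```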