Hi,
Is there a way to use the Azure Video Analyzer architecture on Nvidia Jetson edge devices (Nano or Xavier NX) with a GPU-accelerated inference server?
So far I have only found tutorials that use Nvidia's DeepStream SDK as a substitute for the Azure Video Analyzer (AVA) module. However, by using the DeepStream SDK I wouldn't be able to take advantage of AVA's pipeline topology architecture for quickly deploying different use cases.
Instead, I would like to deploy my own inference server as a gRPC extension module within the AVA pipeline (as indicated here). Ideally this would be based on a container that sends inference jobs to the Jetson GPU (perhaps Nvidia's Triton Inference Server container?).
This appears to be the approach taken in Microsoft's OpenVINO example, so essentially I would like to replicate that setup for Nvidia's edge devices.
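For context, here is roughly how I imagine the inference container being wired into the pipeline topology as a gRPC extension node. This is only a sketch based on the AVA topology schema, not a tested configuration: the endpoint URL, the `tritonserver` module name, the port, and the image dimensions are placeholders I made up for illustration.

```json
{
  "@apiVersion": "1.1",
  "name": "JetsonGrpcExtensionTopology",
  "properties": {
    "sources": [
      {
        "@type": "#Microsoft.VideoAnalyzer.RtspSource",
        "name": "rtspSource",
        "endpoint": {
          "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
          "url": "${rtspUrl}"
        }
      }
    ],
    "processors": [
      {
        "@type": "#Microsoft.VideoAnalyzer.GrpcExtension",
        "name": "grpcExtension",
        "inputs": [ { "nodeName": "rtspSource" } ],
        "endpoint": {
          "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
          "url": "tcp://tritonserver:44000"
        },
        "dataTransfer": { "mode": "sharedMemory", "SharedMemorySizeMiB": "64" },
        "image": {
          "scale": { "mode": "pad", "width": "416", "height": "416" },
          "format": {
            "@type": "#Microsoft.VideoAnalyzer.ImageFormatRaw",
            "pixelFormat": "rgb24"
          }
        }
      }
    ]
  }
}
```

The idea would be that the container behind `tcp://tritonserver:44000` implements the AVA extension gRPC contract and forwards the decoded frames to the Jetson GPU for inference.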
Hi Yutong,
thank you for the reply. I have already submitted a support ticket, thank you.
It definitely seems possible, but at the moment there is no official guide on how to implement AVA on Jetson devices.
Thanks,
Robby.
Please see this for an example of how to build an inference server around the DeepStream SDK, such that the server can be used with AVA.