Is there a way to use the Azure Video Analyzer architecture on Nvidia Jetson edge devices (Nano or Xavier NX) with a GPU-accelerated inference server?
So far I have only found tutorials that use Nvidia's DeepStream SDK as a substitute for the Azure Video Analyzer (AVA) module. However, by using the DeepStream SDK I wouldn't be able to take advantage of AVA's pipeline topology architecture for quick deployment of different use cases.
Instead, I would like to deploy my own inference server as a gRPC extension module within the AVA pipeline (as indicated here). This would ideally be based on a container that sends inference jobs to the Jetson GPU (maybe Nvidia's Triton Inference Server container?).
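To illustrate, this is roughly the kind of pipeline topology I have in mind: a minimal sketch based on the AVA gRPC extension samples, where the `GrpcExtension` processor forwards frames to a separately deployed inference container. The endpoint URL `tcp://tritonserver:8001`, the module name, and the image dimensions are placeholders for whatever the actual Jetson inference container would expose; I have not verified this against a working Jetson deployment.

```json
{
  "@apiVersion": "1.1",
  "name": "InferencingWithGrpcExtension",
  "properties": {
    "description": "Analyze live video with a GPU-backed gRPC extension (sketch)",
    "parameters": [
      { "name": "rtspUrl", "type": "String" }
    ],
    "sources": [
      {
        "@type": "#Microsoft.VideoAnalyzer.RtspSource",
        "name": "rtspSource",
        "endpoint": {
          "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
          "url": "${rtspUrl}"
        }
      }
    ],
    "processors": [
      {
        "@type": "#Microsoft.VideoAnalyzer.GrpcExtension",
        "name": "grpcExtension",
        "endpoint": {
          "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
          "url": "tcp://tritonserver:8001"
        },
        "image": {
          "scale": { "mode": "pad", "width": "416", "height": "416" },
          "format": {
            "@type": "#Microsoft.VideoAnalyzer.ImageFormatRaw",
            "pixelFormat": "rgb24"
          }
        },
        "inputs": [ { "nodeName": "rtspSource" } ]
      }
    ],
    "sinks": [
      {
        "@type": "#Microsoft.VideoAnalyzer.IotHubMessageSink",
        "name": "hubSink",
        "hubOutputName": "inferenceOutput",
        "inputs": [ { "nodeName": "grpcExtension" } ]
      }
    ]
  }
}
```

The question, then, is whether the container behind that endpoint can be one that dispatches inference to the Jetson's GPU.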
This seems to be the case in Microsoft's OpenVINO example, so essentially I would like to replicate this setup for Nvidia's edge devices.