I'm looking for a metric, for example latency in milliseconds, that represents the inference time of a pre-trained model running on the Azure Percept DK.
For example, I have been building a network in TensorFlow and would like to test it on the Azure Percept DK. I would like a latency metric that reflects the changes I make to my network, so I can track inference performance over time.
Is there any way to get this information from the Percept devkit? I see telemetry information, but it doesn't seem to include the performance numbers I am seeking.
If it is easier to demonstrate with an example, is this available for any of the pre-trained models supplied with the devkit, so I can test model inference performance on one of those?
Are there any guides or precedents for collecting network performance on this device? I'm not looking for precision/recall/mAP percentages here, just network latency times.
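For reference, this is roughly the kind of measurement I can already do on my development machine, and what I'd like an equivalent of on the device. A minimal sketch; the model call is stubbed with a plain function since the real network isn't shown here, but in practice it would be a TensorFlow inference call:

```python
import time
import statistics

def run_inference(frame):
    # Stand-in for the real model call (e.g. a TensorFlow predict on a
    # frame); stubbed here so the timing harness itself is runnable.
    return sum(frame)

def measure_latency_ms(infer, inputs, warmup=5):
    """Time each inference call and report latency stats in milliseconds."""
    # Warm-up runs so one-time costs (graph building, cache fills)
    # don't skew the measurements.
    for frame in inputs[:warmup]:
        infer(frame)
    latencies = []
    for frame in inputs:
        start = time.perf_counter()
        infer(frame)
        latencies.append((time.perf_counter() - start) * 1000.0)
    return {
        "mean_ms": statistics.mean(latencies),
        "p95_ms": sorted(latencies)[int(0.95 * (len(latencies) - 1))],
    }

if __name__ == "__main__":
    # Dummy inputs standing in for camera frames.
    dummy_inputs = [[0.0] * 1024 for _ in range(100)]
    print(measure_latency_ms(run_inference, dummy_inputs))
```

Ideally I'd get mean and tail latency like this, but measured on the Percept DK hardware rather than on my workstation.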
Thanks.