I have tried the Visual Studio 2019 Model Builder for object detection. I followed a tutorial on stop-sign image object detection that uses Azure for training, then used the generated model to make inferences. So far everything works fine. I can also generate a web API that takes the JSON input { "ImageSource": "path to local image" }, and that works too.
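For reference, this is roughly how I call the locally hosted web API (the URL and route are placeholders for my setup, not the exact ones Model Builder generated):

```python
import json
import urllib.request

# Placeholder for wherever the generated web API is listening.
LOCAL_API_URL = "http://localhost:5000/predict"

def build_body(image_path: str) -> bytes:
    """JSON body in the shape the generated web API accepts."""
    return json.dumps({"ImageSource": image_path}).encode("utf-8")

def score_local(image_path: str) -> bytes:
    """POST the image path to the local web API and return the raw response."""
    req = urllib.request.Request(
        LOCAL_API_URL,
        data=build_body(image_path),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```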
Now the problem: I don't want to use my local CPU for the inference; I want Azure to do it. So I looked for the experiment generated by Model Builder, found the model, and deployed it to an endpoint.
When I go to the generated endpoint, there is a Test tab, and I assume I need to supply JSON there for the inference. What is the required JSON format? I have tried all of the following and none of them work:
- just the URL of the image file as the body
- { "url": "url to the image" }
- { "imageSource": "url to the image" }
- { "data": [ { "url": "url to the image" } ] }
- { "data": [ { "imageSource": "url to the image" } ] }
I can't find any documentation on the exact JSON format, and when I call the REST API from Postman/Insomnia it always returns a timeout error.
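Here is roughly how I reproduce those attempts from code (the scoring URI, key, and image URL below are placeholders, not my real values):

```python
import json
import urllib.request

# Placeholders -- not my real endpoint, key, or image.
SCORING_URI = "https://<my-endpoint>/score"
API_KEY = "<endpoint-key>"
IMAGE_URL = "https://example.com/stop-sign.jpg"

# The body shapes I have tried so far.
attempts = [
    IMAGE_URL,                            # just the URL as the raw body
    {"url": IMAGE_URL},
    {"imageSource": IMAGE_URL},
    {"data": [{"url": IMAGE_URL}]},
    {"data": [{"imageSource": IMAGE_URL}]},
]

def build_request_body(payload) -> bytes:
    # A bare string is sent as-is; anything else is JSON-encoded.
    text = payload if isinstance(payload, str) else json.dumps(payload)
    return text.encode("utf-8")

def score(payload) -> bytes:
    """POST one payload to the deployed endpoint and return the raw response."""
    req = urllib.request.Request(
        SCORING_URI,
        data=build_request_body(payload),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return resp.read()
```

Every shape in `attempts` either times out or, as the log below shows, gets the whole batch of images marked invalid.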
Below is the deployment log from when I run the test:
Starting the inference
/azureml-envs/azureml_a5cc75b048d996dfdd3ff5c7e66b85eb/lib/python3.7/site-packages/azureml/contrib/automl/dnn/vision/common/utils.py: since ignore_data_errors is True, file will be ignored.
Got AutoMLVisionDataException as all images in the current batch are invalid. Skipping the batch.
Number of lines written to prediction file: 0
Total scoring time 0.0095 for 0 batches. Batch avg: 0.0000.
Mem stats scoring: {}.
GPU stats scoring: {}{}.
Finished inferencing.
2021-07-08 04:15:27,849 | root | INFO | run() output is HTTP Response
2021-07-08 04:15:27,849 | root | INFO | 200
127.0.0.1 - - [08/Jul/2021:04:15:27 +0000] "POST /score?verbose=true HTTP/1.0" 200 0 "-" "Go-http-client/1.1"