question

IvanBudiono-3286 asked · romungi-MSFT commented

How to call the Azure endpoint REST API for a model generated by Visual Studio 2019 ML Model Builder?

I have tried the Visual Studio 2019 Model Builder for object detection. I followed a tutorial on stop-sign image object detection that uses Azure for training, then used the generated model to run inference. So far everything works. I can also generate a web API that takes the JSON input { "ImageSource": "path to local image" }, and this works too.
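For reference, a call to the locally generated web API can be sketched like this. The URL and image path are placeholders (the actual route depends on the generated project); only the { "ImageSource": ... } body shape comes from the question:

```python
import json
import urllib.request

# Hypothetical local URL of the generated Model Builder web API.
LOCAL_API_URL = "http://localhost:5000/predict"

# The JSON input shape the generated web API accepts (per the question);
# the image path itself is a placeholder.
payload = {"ImageSource": r"C:\images\stop-sign.jpg"}

request = urllib.request.Request(
    LOCAL_API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# urllib.request.urlopen(request) would actually send it; omitted so the
# snippet stays runnable without the service running.
print(request.get_method(), request.get_header("Content-type"))
```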

The problem is that I do not want to use my local CPU for inference; I want Azure to do it. So I looked for the experiment generated by Model Builder, found the model, and deployed it to an endpoint.

Now, when I go to the generated endpoint, there is a Test tab where I assume I need to supply the JSON for inference. What is the required JSON format? I have tried all of the following and none of them work:
- just the url of the image file
- use { "url" : "url to the image" }
- use { "imageSource": "url to the image" }
- use { "data": [ {"url" : "url to the image"} ] }
- use { "data": [ {"imageSource" : "url to the image"} ] }

I can't find any documentation about the exact JSON format, and when I call the REST API from Postman/Insomnia it always returns a timeout error.
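One thing the log below hints at: the scoring script reports "all images in the current batch are invalid", which can happen when the endpoint expects the raw image bytes in the request body rather than a JSON wrapper. This is only a guess, but a sketch of such a call (the scoring URI, key, and image bytes are all placeholders) would look like:

```python
import urllib.request

# Hypothetical values; replace with the endpoint's actual scoring URI and key.
SCORING_URI = "https://example-region.inference.ml.azure.com/score"
API_KEY = "<endpoint-key>"

# Placeholder bytes; normally open("stop-sign.jpg", "rb").read().
image_bytes = b"\xff\xd8\xff"

request = urllib.request.Request(
    SCORING_URI,
    data=image_bytes,  # raw image bytes, not JSON
    headers={
        "Content-Type": "application/octet-stream",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)

# urllib.request.urlopen(request) would send it; omitted here since the
# URI and key are placeholders.
print(request.get_method(), request.get_header("Content-type"))
```

If the endpoint does accept JSON after all, the same request structure applies with a JSON body and "application/json" content type.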

Below is the deployment log from when I run the test.

Starting the inference
/azureml-envs/azureml_a5cc75b048d996dfdd3ff5c7e66b85eb/lib/python3.7/site-packages/azureml/contrib/automl/dnn/vision/common/utils.py: since ignore_data_errors is True, file will be ignored.
Got AutoMLVisionDataException as all images in the current batch are invalid. Skipping the batch.
Number of lines written to prediction file: 0
Total scoring time 0.0095 for 0 batches. Batch avg: 0.0000.
Mem stats scoring: {}.
GPU stats scoring: {}{}.
Finished inferencing.
2021-07-08 04:15:27,849 | root | INFO | run() output is HTTP Response
2021-07-08 04:15:27,849 | root | INFO | 200
127.0.0.1 - - [08/Jul/2021:04:15:27 +0000] "POST /score?verbose=true HTTP/1.0" 200 0 "-" "Go-http-client/1.1"



azure-machine-learning

@IvanBudiono-3286 I think you followed the steps from the Azure ML.NET documentation as mentioned here. Those steps cover training the model on Azure and predicting a local image, but they do not cover how the model should be called once a user deploys it to an Azure ML endpoint.

Since you have taken the trained model from the experiment and deployed it, it should in theory return predictions when called with the correct JSON format. Usually a Swagger document can be generated for the endpoint, but I am not sure whether that applies to this Model Builder sample.

Looking at the input class of the project, it appears the JSON includes Label and imageSource. So, based on the ML.NET documentation on consuming the model, you could try adding the Label field and calling the endpoint again.
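The suggested payload can be sketched as follows. The field names come from the input class mentioned above; the casing and the values are guesses (Model Builder input classes typically use PascalCase):

```python
import json

# Hypothetical payload matching the project's input class; both field
# casing and values are assumptions, not confirmed by the endpoint.
candidate = {
    "Label": "stop-sign",
    "ImageSource": "https://example.com/stop-sign.jpg",
}

body = json.dumps(candidate)
print(body)
```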



0 Answers