
Helena-1196 asked DavidEeckhout-4097 commented

Azure Designer: Cannot create inference because there is no model on this pipeline

I have created a pipeline in Azure Machine Learning Designer and am trying to deploy it for batch prediction.

When I click "Create Inference Pipeline" and then "Batch Inference Pipeline", I get this error message:
Cannot create inference because there is no model on this pipeline.

How can I deploy this as a batch prediction?


azure-machine-learning · azure-batch · azure-machine-learning-inference

LuZhang-4441 answered DavidEeckhout-4097 commented

Is it possible to share a screenshot of your Designer pipeline, or to submit feedback by clicking the smiley face in the top right corner of the studio portal? This will help us better debug/troubleshoot the issue.


I can submit the feedback.

But basically I have a pipeline with 2 web service inputs and 2 web service outputs. In between there are only Execute Python Script modules (no model attached) — roughly as sketched below.

I want to feed in 2 CSV files and generate 2 outputs for those CSV files.

In Azure Machine Learning Classic (V1) this was possible, but now I can't deploy it as a web service if there is no model attached to the pipeline.
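For reference, each Execute Python Script module exposes an `azureml_main` entry point; a minimal sketch of what mine look like (the transformation here is a placeholder, not my actual script):

```python
# Minimal sketch of a Designer "Execute Python Script" module.
# dataframe1 / dataframe2 arrive from whatever is wired to the two input
# ports (in my case, the web service inputs); the returned tuple is exposed
# on the module's result output port.
import pandas as pd

def azureml_main(dataframe1=None, dataframe2=None):
    scored = dataframe1.copy()
    scored["result"] = scored.iloc[:, 0]  # placeholder transformation
    return scored,
```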


I am facing the same issue. Is there any method to achieve this task?
I know it was possible in AML v1.


If you want to run batch inference, you don't need to convert it to a batch inference pipeline. You can publish it directly by following this document: https://docs.microsoft.com/en-us/azure/machine-learning/how-to-retrain-designer. If you need a real-time inference pipeline, the workaround is to clone the pipeline first and then submit a run; after that, you should be able to deploy a real-time endpoint.
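For example, once the Designer pipeline has been published as a pipeline endpoint, it can be submitted from the SDK. A minimal sketch with the Azure ML SDK v1 (the endpoint and experiment names below are placeholders, not values from this thread):

```python
# Minimal sketch: submit a published Designer pipeline for batch scoring.
# Assumes the pipeline was published from the Designer "Publish" button;
# "my-batch-endpoint" and "batch-scoring" are placeholder names.
from azureml.core import Workspace
from azureml.pipeline.core import PipelineEndpoint

ws = Workspace.from_config()

endpoint = PipelineEndpoint.get(workspace=ws, name="my-batch-endpoint")
run = endpoint.submit(experiment_name="batch-scoring")
run.wait_for_completion(show_output=True)
```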

PoosariSenthilkumar-7596 answered PoosariSenthilkumar-7596 edited

I am also having this issue. Is there any solution for this? We have existing R code which uses saved xgboost R models. We used the Execute R Script task, and the experiment runs fine and returns the results as expected, but it does not contain a Train Model task. How can this be published as a web service?

Appreciate any help in advance.
