Hello,
I recently deployed an Azure Machine Learning model built with the Designer. I followed this tutorial (adapted to my own model). The inference pipeline looks like this:

This endpoint was then deployed with the following configurations:
Compute type: Azure Container Instances
CPU cores: 2
Memory: 2 GB
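For reference, the same configuration expressed with the azureml-core SDK (v1) would look roughly like this; this is a sketch matching the values above, not the exact code I used:

```python
from azureml.core.webservice import AciWebservice

# ACI deployment configuration equivalent to the settings above:
# 2 CPU cores, 2 GB of memory.
aci_config = AciWebservice.deploy_configuration(
    cpu_cores=2,
    memory_gb=2,
)
```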
From my understanding, when a model endpoint is first deployed, Azure ML automatically runs a test that tries to consume the endpoint. This has always worked for me (I have also tried AKS, and ACI with less compute). Attached, please find the logs.
109955-initial-test-endpoint.txt
After the test run, I can consume the endpoint either from the studio interface or by executing the Python code that Azure ML provides. In both cases I get a 500 Bad Gateway error (checked through the deployment logs):
109962-endpoint-test.txt
This is what the Python code outputs:
I have monitored the endpoint's status while trying to consume it, and it always shows as Healthy.
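For context, the consume code that Azure ML generates is essentially a urllib POST with a Bearer key. Here is a minimal sketch of what it boils down to; the URL, key, and payload shape are placeholders, not my real values:

```python
import json
import urllib.error
import urllib.request


def build_scoring_request(scoring_url, api_key, payload):
    """Build the POST request that the generated consume code sends."""
    body = json.dumps(payload).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + api_key,
    }
    return urllib.request.Request(
        scoring_url, data=body, headers=headers, method="POST"
    )


def score(scoring_url, api_key, payload):
    """Send the request and return the decoded JSON response."""
    req = build_scoring_request(scoring_url, api_key, payload)
    try:
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())
    except urllib.error.HTTPError as err:
        # On a 5xx, the response body often contains the scoring
        # container's own error message, which helps with debugging.
        print("Request failed:", err.code, err.read().decode("utf-8", "replace"))
        raise
```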
I hope I have provided enough detail; if not, please let me know what else is needed to understand the issue. Any help is appreciated.
