question

Hidde-5466 asked

Automated machine learning model deployment issue

So I'm having an issue setting up an endpoint for a machine learning model that was trained using Azure AutoML. When I try to test the deployed model, I get an error saying the service is temporarily unavailable. After searching online, I found that this can happen because of an error in the run() function of the entry script.

When I test the entry script in a notebook in Azure ML studio, on a fresh compute instance, there are two problems:
First I get the error: AttributeError: 'MSIAuthentication' object has no attribute 'get_token'
This is solved by running: pip install azureml-core

Then I get the error: ModuleNotFoundError: No module named 'azureml.automl.runtime'
I try to solve this with: pip install azureml-automl-runtime
But that throws a lot of incompatibility errors during installation. When I then run the entry script, it fails with the message: "Failed while applying learned transformations."

So I set up a new virtual environment on my local machine in which I only installed azureml-automl-runtime. With that setup the entry script works perfectly fine. So I created a custom environment in Azure ML studio using the conda file of that local virtual environment. Unfortunately, I still get the "service temporarily unavailable" error when testing the endpoint.

I have a feeling the default Azure ML containers are incompatible with azureml-automl-runtime, since installing this on a ML studio notebook also throws a lot of errors.

I feel like there should be an elegant way to deploy an AutoML model, am I doing something wrong here?



Update: I found out I hadn't changed the environment for the endpoint, which is probably why I kept getting the same error. When using the custom environment I got errors from gunicorn, so I added that package to the environment as well. Now I get the following error:

    File "/var/azureml-server/entry.py", line 1, in <module>
      import create_app
    File "/var/azureml-server/create_app.py", line 4, in <module>
      from routes_common import main
    File "/var/azureml-server/routes_common.py", line 39, in <module>
      from azure.ml.api.exceptions.ClientSideException import ClientSideException
  ModuleNotFoundError: No module named 'azure.ml'


So what do I install to fix this? Is there a list somewhere of required packages for an ML model endpoint?
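For reference, the scoring server in the default inference images (the /var/azureml-server code in the traceback above) imports its routes from the azureml-defaults pip package, so a custom inference environment generally needs at least that package alongside the AutoML runtime. A minimal conda specification might look like the following sketch; the package list is illustrative, not an official requirements list:

```yaml
name: automl-inference
dependencies:
  - python=3.7
  - pip
  - pip:
      # Provides the scoring server dependencies (gunicorn, flask,
      # and the azure.ml inference routes used by /var/azureml-server).
      - azureml-defaults
      # Needed to deserialize and run the AutoML-trained model.
      - azureml-automl-runtime
```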



azure-machine-learning

@Hidde-5466 Thanks for the question. Can you please share the entry script that you are using?

  """Entry script for an Azure ML model endpoint."""
  import json

  import joblib
  import pandas as pd

  from azureml.core import Workspace
  from azureml.core.model import Model


  def init():
      model_name = 'test'
      model_file_name = 'model.pkl'

      ws = Workspace(subscription_id="XX",
                     resource_group="XX",
                     workspace_name="XX")

      # Download the registered model and load it into a global for run().
      global model
      Model(ws, model_name).download(target_dir='.', exist_ok=True)
      model = joblib.load(model_file_name)


  def run(data):
      data_dict = json.loads(data)
      input_data = pd.DataFrame.from_dict(data_dict)
      prediction = model.predict_proba(input_data)
      return prediction
This is the code I'm using


Hidde-5466 answered

I managed to fix the environment issue by simply adding every package that threw an error. Then I found out that the return value of run() has to be a JSON-serializable object (a dict or list); if it isn't, the endpoint throws the exact same "service temporarily unavailable" error.
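To illustrate that fix, here is a sketch of a run() function whose return value is JSON-serializable. The stub model below is purely hypothetical and stands in for the real AutoML model loaded in init():

```python
import json

import numpy as np
import pandas as pd


class _StubModel:
    """Hypothetical stand-in for the AutoML model, for illustration only."""

    def predict_proba(self, df):
        # Pretend every row gets the same class probabilities.
        return np.tile([0.3, 0.7], (len(df), 1))


model = _StubModel()


def run(data):
    data_dict = json.loads(data)
    input_df = pd.DataFrame.from_dict(data_dict)
    proba = model.predict_proba(input_df)
    # Convert the NumPy array to plain Python lists so the result is
    # JSON-serializable; returning the raw ndarray triggers the generic
    # "service temporarily unavailable" response.
    return {"probabilities": proba.tolist()}
```

Returning `prediction` directly, as in the original script, fails because an ndarray cannot be serialized by the scoring server; `.tolist()` inside a dict is enough to avoid that.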

But my issue with the confusing curated environments and azureml-automl-runtime in ML studio notebooks remains. Maybe this is worth looking into, @ramr-msft.


ramr-msft answered

@Hidde-5466 Thanks. Can you try this notebook for deployment? If it works for you (it should), compare it with your code:
https://github.com/CESARDELATORRE/Easy-AutoML-MLOps/blob/master/notebooks/5-automl-model-service-deployment-and-inference/automl-model-service-deployment-and-inference-safe-driver-classifier.ipynb

You’ll first need to train and register the model with this previous notebook using a pipeline:
https://github.com/CESARDELATORRE/Easy-AutoML-MLOps/blob/master/notebooks/4-automlstep-pipeline-run/automlstep-pipeline-run-safe-driver-classifier.ipynb

You can also use the notebook with a simple AutoML remote run, but you might need to change the name of the model when registering it in the Workspace since it’s a different name to what the deployment notebook is using:
https://github.com/CESARDELATORRE/Easy-AutoML-MLOps/blob/master/notebooks/3-automl-remote-compute-run/automl-remote-compute-run-safe-driver-classifier.ipynb


Thanks for your reply! The first notebook is quite helpful. I have one issue though, which curated environment should I use?
The first notebook has this piece of code:

  # Use curated environment from AML named "AzureML-Tutorial"
  automl_curated_environment = Environment.get(workspace=ws, name="AzureML-AutoML")

This piece confuses me: the comment says 'AzureML-Tutorial', but the code actually uses 'AzureML-AutoML'. And when I look at the curated environments in my workspace, there is no 'AzureML-AutoML', which, judging by the name, is the one I would need.
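As a quick way to see what is actually available, the names returned by Environment.list(workspace=ws) can be filtered for AutoML-related curated environments. The helper below is a hypothetical illustration, and the sample names are made up rather than a guaranteed listing:

```python
def automl_curated_candidates(env_names):
    """Filter environment names down to curated AutoML-looking ones.

    Hypothetical helper: curated environments are prefixed with
    "AzureML-", so we look for that prefix plus "AutoML" in the name.
    """
    return sorted(n for n in env_names
                  if n.startswith("AzureML-") and "AutoML" in n)


# Sample input mimicking the keys of Environment.list(workspace=ws);
# these names are assumptions for illustration, not a real listing.
sample = ["AzureML-Tutorial", "AzureML-AutoML", "AzureML-Minimal", "my-custom-env"]
print(automl_curated_candidates(sample))
```

In a real workspace you would pass the keys of `Environment.list(workspace=ws)` to this filter; if 'AzureML-AutoML' does not appear, it may simply not be exposed in that workspace or SDK version.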


@Hidde-5466 Thanks for the details. We have forwarded this to the product team to check on it.
