khubaibRaza-8970 asked · romungi-MSFT commented

Custom Argument pass to Docker Container Azure ML inference

Hello Team,

I'm trying to pass arguments to the Azure ML Docker container. I have created an environment like this:

 env = Environment.from_conda_specification(name='pytorch-1.6-gpu', file_path='curated_env/conda_dependencies.yml' )

Am I passing the arguments correctly?

 DOCKER_ARGUMENTS = ["--shm-size","32G"]  # increase shared memory
 env.docker.arguments = DOCKER_ARGUMENTS

The main goal of this project is to deploy a model to an AKS inference cluster. I have successfully deployed the model, but when I try to get predictions from it I get this error:

 It is possible that data loaders workers are out of shared memory. Please try to raise your shared memory limit

If that's not the correct way to pass arguments, how should I do it?


1 Answer

romungi-MSFT answered · romungi-MSFT commented

@khubaibRaza-8970 To pass the argument that increases the default "shm_size", you would have to use the DockerConfiguration object. Here is a sample that achieves this:

 from azureml.core import Environment
 from azureml.core import ScriptRunConfig
 from azureml.core.runconfig import DockerConfiguration
 # Specify VM and Python environment:
 my_env = Environment.from_conda_specification(name='my-test-env', file_path=PATH_TO_YAML_FILE)
 my_env.docker.base_image = ''
 docker_config = DockerConfiguration(use_docker=True,shm_size='32g')
 # Finally, use the environment in the ScriptRunConfig:
 src = ScriptRunConfig(source_directory=DEPLOY_CONTAINER_FOLDER_PATH,
                       environment=my_env,
                       docker_runtime_config=docker_config)



Thank you for your response @romungi-MSFT
Let me clarify: I'm not training a model, I'm just trying to deploy a pre-trained model to the inference cluster, so I don't think ScriptRunConfig will help me.

How can I pass the DockerConfiguration object to InferenceConfig, or is there another way?

Here is my configuration; it might help you understand what I'm trying to achieve.

 inference_config = InferenceConfig(entry_script="", environment=env, source_directory='.')
 gpu_aks_config = AksWebservice.deploy_configuration(autoscale_enabled=False,
 service = Model.deploy(workspace=ws, 


You can use the extra_docker_file_steps parameter of InferenceConfig() to run steps while the image is being built. It essentially points to a file of additional Dockerfile steps that are run during image setup.
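A minimal sketch of that approach: write the extra Dockerfile steps to a file and point InferenceConfig at it. The file name "extra_docker_steps" and the specific steps below are assumptions for illustration, not from this thread; note also that --shm-size is a docker run-time flag, so a Dockerfile step alone cannot raise the shared memory limit.

```python
from pathlib import Path

# Hypothetical extra Dockerfile steps written to a local file. The ENV
# variable name below is a placeholder, not an Azure ML convention.
extra_steps = "\n".join([
    "RUN apt-get update && apt-get install -y --no-install-recommends htop",
    "ENV DATALOADER_NUM_WORKERS=0",
])
Path("extra_docker_steps").write_text(extra_steps + "\n")

# Then reference the file in the deployment code (requires azureml-core):
# inference_config = InferenceConfig(entry_script="score.py",
#                                    environment=env,
#                                    source_directory='.',
#                                    extra_docker_file_steps="extra_docker_steps")
```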

For the environment-variables approach, I think the steps mentioned in this SO post should work, but I am not sure whether that particular variable in the scoring script will help build the image with an increased shared memory size.
