I'm trying to pass arguments to the Azure ML Docker container. I have created an environment like this:
env = Environment.from_conda_specification(
    name='pytorch-1.6-gpu',
    file_path='curated_env/conda_dependencies.yml'
)
Am I passing the arguments correctly?

DOCKER_ARGUMENTS = ["--shm-size", "32G"]  # increase shared memory
env.docker.arguments = DOCKER_ARGUMENTS
The main goal of this project is to deploy a model on an AKS inference cluster. I have successfully deployed the model, but when I try to get predictions from it I get this error:
It is possible that data loaders workers are out of shared memory. Please try to raise your shared memory limit
If that's not the correct way to pass arguments, how can I do it?
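For reference, the other route I've come across in the v1 SDK (azureml-core) is a DockerConfiguration object, which exposes an shm_size parameter and is passed to a ScriptRunConfig. This is just a sketch of what I mean; the script name 'train.py' is a placeholder, and as far as I can tell DockerConfiguration is documented for training runs, so I don't know whether it has any effect on an AKS inference container (which is part of my question):

```python
# Sketch, assuming Azure ML SDK v1 (azureml-core)
from azureml.core import Environment, ScriptRunConfig
from azureml.core.runconfig import DockerConfiguration

env = Environment.from_conda_specification(
    name='pytorch-1.6-gpu',
    file_path='curated_env/conda_dependencies.yml'
)

# shm_size maps to Docker's --shm-size flag; documented for training runs,
# unclear (to me) whether it carries over to AKS webservice deployments
docker_config = DockerConfiguration(use_docker=True, shm_size='32g')

src = ScriptRunConfig(
    source_directory='.',
    script='train.py',  # placeholder entry script
    environment=env,
    docker_runtime_config=docker_config,
)
```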