Question

NabeelRaza-8986 asked:

Deploy a GPU-enabled model locally from a studio notebook

Hello there, team. I'm attempting to deploy a model locally in my ML studio notebook, which has GPU compute. When I try to run my model, I get this error:

Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from azure ml

This is what I'm using:

    from azureml.core import Environment
    from azureml.core.model import InferenceConfig

    env = Environment.from_conda_specification(name='pytorch-1.6-gpu', file_path='curated_env/new_cuda_dep.yml')  # environment in use
    inference_config = InferenceConfig(entry_script="score.py", environment=env, source_directory='.')
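For context, these pieces feed into the local deployment roughly like this (a minimal sketch; the workspace handle, model name, service name, and port below are placeholders, not my actual values):

    from azureml.core import Workspace, Model
    from azureml.core.webservice import LocalWebservice

    ws = Workspace.from_config()                    # workspace this notebook is attached to
    model = Model(ws, name='my-registered-model')   # placeholder for the registered model

    # Deploy to a local Docker container on the compute instance
    deployment_config = LocalWebservice.deploy_configuration(port=6789)
    service = Model.deploy(workspace=ws,
                           name='local-gpu-service',
                           models=[model],
                           inference_config=inference_config,
                           deployment_config=deployment_config)
    service.wait_for_deployment(show_output=True)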

Tags: azure-machine-learning, azure-machine-learning-inference

1 Answer

ramr-msft answered:

@NabeelRaza-8986 Thanks for the question. Since the AzureML SDK for local deployment uses the existing Docker client, we have to make sure that this client picks up the NVIDIA container runtime so that the GPUs are available to it. Usually we would use the --gpus all flag when creating a new Docker container.

Make sure you install the NVIDIA Container Runtime.
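For example, one common way to make the Docker client pick up the NVIDIA runtime, assuming the NVIDIA Container Runtime is already installed on the compute instance, is to register it as the default runtime in /etc/docker/daemon.json so that containers started without an explicit --gpus flag can still see the GPU. A rough sketch of that file:

    {
        "default-runtime": "nvidia",
        "runtimes": {
            "nvidia": {
                "path": "nvidia-container-runtime",
                "runtimeArgs": []
            }
        }
    }

After editing it, restart the Docker daemon (for example, sudo systemctl restart docker) and check that docker run --rm --gpus all with a CUDA base image can run nvidia-smi before redeploying.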


NabeelRaza-8986 commented:

I have successfully deployed the model with CUDA working, but now I'm confused about how to pass custom arguments to the Docker container, for example to increase the shared memory size or the number of workers.

I have found that DockerConfiguration can pass custom arguments and the shared memory size. How can I pass this to InferenceConfig or Environment?

Do I need to do something like this?

    from azureml.core.runconfig import DockerConfiguration

    docker_conf = DockerConfiguration(use_docker=True, shm_size='32g')  # use a Docker container with 32 GB shared memory
    env.docker = docker_conf  # attempt to pass the Docker config to the environment


But it's not working.

What would be the way to pass this docker_config to the Environment or InferenceConfig? Please don't recommend ScriptRunConfig, because we're not training the model; we're only interested in deploying a pre-trained model.
