RunConfiguration Class

Represents configuration for experiment runs targeting different compute targets in Azure Machine Learning.

The RunConfiguration object encapsulates the information necessary to submit a training run in an experiment. Typically, you will not create a RunConfiguration object directly but get one from a method that returns it, such as the submit method of the Experiment class.

RunConfiguration is a base environment configuration that is also used in other types of configuration steps that depend on what kind of run you are triggering. For example, when setting up a PythonScriptStep, you can access the step's RunConfiguration object and configure Conda dependencies or access the environment properties for the run.
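
A minimal sketch of that pattern (assuming the azureml-core and azureml-pipeline-steps packages are installed; the package name in pip_packages is illustrative):

   from azureml.core.runconfig import RunConfiguration
   from azureml.core.conda_dependencies import CondaDependencies

   # create a run configuration and attach Conda dependencies to its environment
   run_config = RunConfiguration()
   run_config.environment.python.conda_dependencies = CondaDependencies.create(
       pip_packages=['pandas'])
   # the run configuration can then be passed to a PythonScriptStep
   # via its runconfig parameter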

For examples of run configurations, see Select and use a compute target to train your model.

Initialize a RunConfiguration with the default settings.

Inheritance
azureml._base_sdk_common.abstract_run_config_element._AbstractRunConfigElement
RunConfiguration

Constructor

RunConfiguration(script=None, arguments=None, framework=None, communicator=None, conda_dependencies=None, _history_enabled=None, _path=None, _name=None, command=None)

Parameters

script
str
default value: None

The relative path to the Python script file. The file path is relative to the source directory passed to submit.

arguments
list[str]
default value: None

Command line arguments for the Python script file.

framework
str
default value: None

The targeted framework used in the run. Supported frameworks are Python, PySpark, TensorFlow, and PyTorch.

communicator
str
default value: None

The communicator used in the run. The supported communicators are None, ParameterServer, OpenMpi, and IntelMpi. Keep in mind that OpenMpi requires a custom image with OpenMpi installed. Use ParameterServer or OpenMpi for AmlCompute clusters. Use IntelMpi for distributed training jobs.

conda_dependencies
CondaDependencies
default value: None

The Conda dependencies to use for the run's Python environment. When left at the default value of None, the system creates a default Python environment; otherwise, the environment includes the packages specified in this CondaDependencies object.

auto_prepare_environment
bool
Required

DEPRECATED. This setting is no longer used.

command
list[str] or str
default value: None

The command to be submitted for the run. The command property can be used instead of script/arguments; the command and script/arguments properties cannot both be used together to submit a run. To submit a script file using the command property: ['python', 'train.py', '--arg1', arg1_val]. To run an actual command: ['ls'].

_history_enabled
default value: None
_path
default value: None
_name
default value: None

Remarks

We typically build machine learning systems to solve a specific problem. For example, we might be interested in finding the best model that ranks web pages that might be served as search results corresponding to a query. Our search for the best machine learning model may require us to try out different algorithms or to consider different parameter settings.

In the Azure Machine Learning SDK, we use the concept of an experiment to capture the notion that different training runs are related by the problem that they're trying to solve. An Experiment then acts as a logical container for these training runs, making it easier to track progress across training runs, compare two training runs directly, etc.

The RunConfiguration encapsulates execution environment settings necessary to submit a training run in an experiment. It captures both the shared structure of training runs that are designed to solve the same machine learning problem, as well as the differences in the configuration parameters (e.g., learning rate, loss function, etc.) that distinguish distinct training runs from each other.

In typical training scenarios, RunConfiguration is used by creating a ScriptRunConfig object that packages together a RunConfiguration object and an execution script for training.

The configuration of RunConfiguration includes the following (a short sketch follows this list):

  • Bundling the experiment source directory, including the submitted script.

  • Setting the command-line arguments for the submitted script.

  • Configuring the path for the Python interpreter.

  • Obtaining the Conda configuration that manages the application dependencies. The job submission process can use the configuration to provision a temporary Conda environment and launch the application within it. Temporary environments are cached and reused in subsequent runs.

  • Optional usage of Docker and custom base images.

  • Optional choice of submitting the experiment to multiple types of Azure compute.

  • Optional choice of configuring how to materialize inputs and upload outputs.

  • Advanced runtime settings for common runtimes like Spark and TensorFlow.
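
The following sketch shows how these settings map to properties on a RunConfiguration object (a hedged example; the package and variable values are illustrative):

   from azureml.core import RunConfiguration
   from azureml.core.conda_dependencies import CondaDependencies

   run_config = RunConfiguration()
   # choose the compute target; "local" is the default
   run_config.target = 'local'
   # manage application dependencies through a Conda configuration
   run_config.environment.python.conda_dependencies = CondaDependencies.create(
       pip_packages=['scikit-learn'])
   # set runtime environment variables for the run
   run_config.environment_variables = {'EXAMPLE_ENV_VAR': 'example_value'}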

The following example shows how to submit a training script on your local machine.


   from azureml.core import ScriptRunConfig, RunConfiguration, Experiment, Workspace

   # load the workspace from a saved configuration file
   workspace = Workspace.from_config()
   # create or load an experiment
   experiment = Experiment(workspace, "MyExperiment")
   # run a trial from the train.py code in your current directory
   config = ScriptRunConfig(source_directory='.', script='train.py',
       run_config=RunConfiguration())
   run = experiment.submit(config)

The following example shows how to submit a training script on your cluster using the command property instead of script and arguments.


   from azureml.core import ScriptRunConfig, Experiment, Environment, Workspace

   # load the workspace from a saved configuration file
   workspace = Workspace.from_config()
   # create or load an experiment
   experiment = Experiment(workspace, 'MyExperiment')
   # create or retrieve a compute target
   cluster = workspace.compute_targets['MyCluster']
   # create or retrieve an environment
   env = Environment.get(workspace, name='MyEnvironment')
   # configure and submit your training run
   arg1_val = 'example-value'
   config = ScriptRunConfig(source_directory='.',
                            command=['python', 'train.py', '--arg1', arg1_val],
                            compute_target=cluster,
                            environment=env)
   script_run = experiment.submit(config)

The following sample shows how to run a command on your cluster.


   from azureml.core import ScriptRunConfig, Experiment, Environment, Workspace

   # load the workspace from a saved configuration file
   workspace = Workspace.from_config()
   # create or load an experiment
   experiment = Experiment(workspace, 'MyExperiment')
   # create or retrieve a compute target
   cluster = workspace.compute_targets['MyCluster']
   # create or retrieve an environment
   env = Environment.get(workspace, name='MyEnvironment')
   # configure and submit a run of the 'ls -l' command
   config = ScriptRunConfig(source_directory='.',
                            command=['ls', '-l'],
                            compute_target=cluster,
                            environment=env)
   script_run = experiment.submit(config)

Variables

environment
Environment

The environment definition. This field configures the Python environment. It can be configured to use an existing Python environment or to set up a temporary environment for the experiment. The definition is also responsible for setting the required application dependencies.

max_run_duration_seconds
int

The maximum time allowed for the run. The system attempts to automatically cancel the run if it takes longer than this value.

node_count
int

The number of nodes to use for the job.

priority
int

The priority of the job for scheduling policy.

history
HistoryConfiguration

The configuration section used to disable and enable experiment history logging features.

spark
SparkConfiguration

When the platform is set to PySpark, the Spark configuration section is used to set the default SparkConf for the submitted job.

hdi
HdiConfiguration

The HDI configuration section takes effect only when the target is set to an Azure HDI compute. The HDI Configuration is used to set the YARN deployment mode. The default deployment mode is cluster.

docker
DockerConfiguration

The Docker configuration section is used to set variables for the Docker environment.

tensorflow
TensorflowConfiguration

The configuration section used to configure distributed TensorFlow parameters. This parameter takes effect only when the framework is set to TensorFlow, and the communicator to ParameterServer. AmlCompute is the only supported compute for this configuration.

mpi
MpiConfiguration

The configuration section used to configure distributed MPI job parameters. This parameter takes effect only when the framework is set to Python, and the communicator to OpenMpi or IntelMpi. AmlCompute is the only supported compute type for this configuration. A short sketch of a distributed MPI setup follows this list of variables.

pytorch
PyTorchConfiguration

The configuration section used to configure distributed PyTorch job parameters. This parameter takes effect only when the framework is set to PyTorch, and the communicator to Nccl or Gloo. AmlCompute is the only supported compute type for this configuration.

paralleltask
ParallelTaskConfiguration

The configuration section used to configure distributed paralleltask job parameters. This parameter takes effect only when the framework is set to Python, and the communicator to ParallelTask. AmlCompute is the only supported compute type for this configuration.

data_references
dict[str, DataReferenceConfiguration]

The data sources to make available to the run during execution. For each item of the dictionary, the key is a name given to the data source and the value is a DataReferenceConfiguration.

data
dict[str, Data]

All the data to make available to the run during execution.

datacaches
list[DatacacheConfiguration]

All the datacaches to make available to the run during execution.

output_data
OutputData

All the outputs that should be uploaded and tracked for this run.

source_directory_data_store
str

The backing datastore for the project share.

amlcompute
AmlComputeConfiguration

The details of the compute target to be created during the experiment. The configuration only takes effect when the compute target is AmlCompute.

kubernetescompute
KubernetesComputeConfiguration

The details of the compute target to be used during the experiment. The configuration only takes effect when the compute target is KubernetesCompute.

services
dict[str, ApplicationEndpointConfiguration]

Endpoints for interacting with the compute resource. Allowed endpoints are Jupyter, JupyterLab, VS Code, Tensorboard, SSH, and custom ports.
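
As a hedged sketch of the distributed-training variables above (the cluster name and process counts are illustrative; MpiConfiguration is assumed to be importable from azureml.core.runconfig):

   from azureml.core.runconfig import MpiConfiguration, RunConfiguration

   # configure a distributed MPI run on an AmlCompute cluster
   run_config = RunConfiguration(framework='Python', communicator='OpenMpi')
   run_config.target = 'MyCluster'  # name of an AmlCompute target
   run_config.node_count = 2
   run_config.mpi = MpiConfiguration()
   run_config.mpi.process_count_per_node = 2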

Methods

delete

Delete a run configuration file.

Raises a UserErrorException if the configuration file is not found.

load

Load a previously saved run configuration file from an on-disk file.

If path points to a file, the RunConfiguration is loaded from that file.

If path points to a directory, which should be a project directory, then the RunConfiguration is loaded from <path>/.azureml/<name> or <path>/aml_config/<name>.

save

Save the RunConfiguration to a file on disk.

A UserErrorException is raised when:

  • The RunConfiguration can't be saved with the name specified.

  • No name parameter was specified.

  • The path parameter is invalid.

If path is of the format <dir_path>/<file_name>, where <dir_path> is a valid directory, then the RunConfiguration is saved at <dir_path>/<file_name>.

If path points to a directory, which should be a project directory, then the RunConfiguration is saved at <path>/.azureml/<name> or <path>/aml_config/<name>.

This method is useful when editing the configuration manually or when sharing the configuration with the CLI.

delete

Delete a run configuration file.

Raises a UserErrorException if the configuration file is not found.

static delete(path, name)

Parameters

path
str
Required

A user-selected root directory for run configurations. Typically this is the Git repository or the Python project root directory. The configuration is deleted from a subdirectory named .azureml.

name
str
Required

The configuration file name.

Exceptions

UserErrorException
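
For example, a short sketch (the directory and file name are illustrative):

   from azureml.core import RunConfiguration

   # remove <project_root>/.azureml/my_run_config
   RunConfiguration.delete(path='.', name='my_run_config')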

load

Load a previously saved run configuration file from an on-disk file.

If path points to a file, the RunConfiguration is loaded from that file.

If path points to a directory, which should be a project directory, then the RunConfiguration is loaded from <path>/.azureml/<name> or <path>/aml_config/<name>.

static load(path, name=None)

Parameters

path
str
Required

A user-selected root directory for run configurations. Typically this is the Git repository or the Python project root directory. For backward compatibility, the configuration is also loaded from the .azureml or aml_config subdirectory. If the file is not in those directories, the file is loaded from the specified path.

name
str
default value: None

The configuration file name.

Returns

The run configuration object.

Return type

RunConfiguration

Exceptions

UserErrorException
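
For example, a short sketch (the directory and file name are illustrative):

   from azureml.core import RunConfiguration

   # load <project_root>/.azureml/my_run_config
   run_config = RunConfiguration.load(path='.', name='my_run_config')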

save

Save the RunConfiguration to a file on disk.

A UserErrorException is raised when:

  • The RunConfiguration can't be saved with the name specified.

  • No name parameter was specified.

  • The path parameter is invalid.

If path is of the format <dir_path>/<file_name>, where <dir_path> is a valid directory, then the RunConfiguration is saved at <dir_path>/<file_name>.

If path points to a directory, which should be a project directory, then the RunConfiguration is saved at <path>/.azureml/<name> or <path>/aml_config/<name>.

This method is useful when editing the configuration manually or when sharing the configuration with the CLI.

save(path=None, name=None, separate_environment_yaml=False)

Parameters

separate_environment_yaml
bool
default value: False

Indicates whether to save the Conda environment configuration. If True, the Conda environment configuration is saved to a YAML file named 'environment.yml'.

path
str
default value: None

A user-selected root directory for run configurations. Typically this is the Git repository or the Python project root directory. The configuration is saved to a subdirectory named .azureml.

name
str
default value: None

[Required] The configuration file name.

Return type

Exceptions

UserErrorException
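
For example, a short sketch of saving a configuration and reloading it later (the file name is illustrative):

   from azureml.core import RunConfiguration

   run_config = RunConfiguration()
   # save to <project_root>/.azureml/my_run_config
   run_config.save(path='.', name='my_run_config')
   # reload it in a later session
   run_config = RunConfiguration.load(path='.', name='my_run_config')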

Attributes

auto_prepare_environment

Get the auto_prepare_environment parameter. This is a deprecated and unused setting.

environment_variables

Runtime environment variables.

Returns

Runtime variables

Return type

dict

target

Get compute target where the job is scheduled for execution.

The default target is "local", referring to the local machine. Available cloud compute targets can be found using the compute_targets property of the Workspace.

Returns

The target name

Return type

str
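
For example, a short sketch (assumes a workspace configuration file is present; the fallback target name is illustrative):

   from azureml.core import RunConfiguration, Workspace

   workspace = Workspace.from_config()
   # list the compute targets available in the workspace
   print(list(workspace.compute_targets.keys()))

   run_config = RunConfiguration()
   run_config.target = 'local'  # or a name from the list above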