MMLBaseEstimatorRunConfig Class
Abstract base class for all Estimator run configs.
DEPRECATED. Use the RunConfiguration class.
Initialize the MMLBaseEstimatorRunConfig.
- Inheritance
- MMLBaseEstimatorRunConfig
Constructor
MMLBaseEstimatorRunConfig(compute_target, vm_size=None, vm_priority=None, entry_script=None, script_params=None, node_count=None, process_count_per_node=None, distributed_backend=None, use_gpu=None, use_docker=None, custom_docker_base_image=None, custom_docker_image=None, image_registry_details=None, user_managed=False, conda_packages=None, pip_packages=None, environment_definition=None, inputs=None, source_directory_data_store=None, shm_size=None)
Parameters
- compute_target
- AbstractComputeTarget or str
The compute target where training will happen. This can either be an object or the string "local".
- vm_size
- str
The VM size of the compute target that will be created for the training.
Supported values: Any Azure VM size.
- vm_priority
- str
The VM priority of the compute target that will be created for the training. If not specified, 'dedicated' is used.
Supported values: 'dedicated' and 'lowpriority'.
This takes effect only when the vm_size parameter is specified in the input.
- script_params
- dict
A dictionary containing parameters that will be passed as arguments to the entry_script.
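As an illustration, a script_params dictionary like the one below is typically flattened into command-line arguments for the entry script. The helper function here is hypothetical, written only to show the mapping; it is not part of the azureml SDK:

```python
# Hypothetical helper illustrating how a script_params dictionary maps to
# command-line arguments for the entry script. NOT part of the azureml SDK.
script_params = {
    "--learning-rate": 0.01,
    "--epochs": 10,
    "--data-folder": "/mnt/data",
}

def to_cli_args(params):
    """Flatten a script_params-style dict into an argv-style list of strings."""
    args = []
    for flag, value in params.items():
        # Each key becomes a flag, each value its string-formatted argument.
        args.extend([flag, str(value)])
    return args

print(to_cli_args(script_params))
```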
- node_count
- int
The number of nodes in the compute target used for training. Only the AmlCompute target is supported for distributed training (node_count > 1).
- process_count_per_node
- int
When using MPI as an execution backend, the number of processes per node.
- distributed_backend
- str
The communication backend for distributed training.
Supported values: 'mpi' and 'ps'.
'mpi': MPI/Horovod, 'ps': parameter server.
This parameter is required when any of node_count, process_count_per_node, worker_count, or parameter_server_count is greater than 1.
When node_count == 1 and process_count_per_node == 1, no backend will be used unless a backend is explicitly set. Only the azureml.core.compute.AmlCompute target is supported for distributed training.
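The rule above can be expressed as a small check. This is an illustrative sketch only; the function name and signature are assumptions, not part of the azureml SDK:

```python
def distributed_backend_required(node_count=1, process_count_per_node=1,
                                 worker_count=1, parameter_server_count=1):
    """Return True when distributed_backend must be specified: any of the
    four counts described above is greater than 1.
    (Illustrative helper, NOT part of the azureml SDK.)"""
    counts = (node_count, process_count_per_node,
              worker_count, parameter_server_count)
    return any(c > 1 for c in counts)

print(distributed_backend_required(node_count=2))   # a backend such as 'mpi' is required
print(distributed_backend_required())               # single-process run; no backend needed
```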
- use_gpu
- bool
Specifies whether the environment to run the experiment should support GPUs.
If true, a GPU-based default Docker image will be used in the environment. If false, a CPU-based
image will be used. Default Docker images (CPU or GPU) will be used only if the custom_docker_image
parameter is not set. This setting is used only in Docker-enabled compute targets.
- use_docker
- bool
Specifies whether the environment to run the experiment should be Docker-based.
- custom_docker_base_image
- str
The name of the Docker image from which the image to use for training will be built.
DEPRECATED. Use the custom_docker_image parameter.
If not set, a default CPU-based image will be used as the base image.
- custom_docker_image
- str
The name of the Docker image from which the image to use for training will be built. If not set, a default CPU-based image will be used as the base image.
- image_registry_details
- ContainerRegistry
The details of the Docker image registry.
- user_managed
- bool
Specifies whether Azure ML reuses an existing Python environment. If false, a Python environment is created based on the conda dependencies specification.
- conda_packages
- list
List of strings representing conda packages to be added to the Python environment for the experiment.
- pip_packages
- list
A list of strings representing pip packages to be added to the Python environment for the experiment.
- environment_definition
- Environment
The environment definition for the experiment. It includes
PythonSection, DockerSection, and environment variables. Any environment option not directly
exposed through other parameters to the Estimator construction can be set using this
parameter. If this parameter is specified, it will take precedence over other environment related
parameters like use_gpu
, custom_docker_image
, conda_packages
, or pip_packages
and
errors will be reported on these invalid combinations.
- inputs
- list
A list of DataReference or DatasetConsumptionConfig objects to use as input.
- shm_size
- str
The size of the Docker container's shared memory block. For more information, see Docker run reference. If not set, the default azureml.core.environment._DEFAULT_SHM_SIZE is used.
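For reference, shm_size strings follow the Docker run reference format: a positive number with an optional b/k/m/g unit suffix. A minimal validator sketch, purely for illustration (this function is an assumption, not part of the azureml SDK or of Docker's tooling):

```python
import re

def is_valid_shm_size(value):
    """Check a Docker-style shared-memory size string such as '2g' or '512m'.
    Format per the Docker run reference: a positive number with an optional
    b (bytes), k (kilobytes), m (megabytes), or g (gigabytes) suffix.
    (Illustrative helper, NOT part of the azureml SDK.)"""
    return re.fullmatch(r"\d+(\.\d+)?[bkmg]?", value.lower()) is not None

print(is_valid_shm_size("2g"))    # a typical shared-memory size
print(is_valid_shm_size("abc"))   # not a size string
```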