Estimator class

Definition

Represents a generic estimator for training models with any supplied framework.

This class is designed for use with machine learning frameworks that do not already have an Azure Machine Learning pre-configured estimator. Pre-configured estimators exist for Chainer, PyTorch, TensorFlow, and SKLearn.

The Estimator class wraps run configuration information to simplify the task of specifying how a script is executed. It supports single-node as well as multi-node execution. Running the estimator produces a model in the output directory specified in your training script.

Estimator(source_directory, *, compute_target=None, vm_size=None, vm_priority=None, entry_script=None, script_params=None, node_count=1, process_count_per_node=1, distributed_backend=None, distributed_training=None, use_gpu=False, use_docker=True, custom_docker_image=None, image_registry_details=None, user_managed=False, conda_packages=None, pip_packages=None, conda_dependencies_file_path=None, pip_requirements_file_path=None, conda_dependencies_file=None, pip_requirements_file=None, environment_variables=None, environment_definition=None, inputs=None, source_directory_data_store=None, shm_size=None, resume_from=None, max_run_duration_seconds=None, _disable_validation=False)
Inheritance
azureml.train.estimator._mml_base_estimator.MMLBaseEstimator
Estimator

Parameters

source_directory
str

A local directory containing experiment configuration and code files needed for a training job.

compute_target
AbstractComputeTarget or str

The compute target where training will happen. This can either be an object or the string "local".

vm_size
str

The VM size of the compute target that will be created for the training. Supported values: Any Azure VM size.

vm_priority
str

The VM priority of the compute target that will be created for the training. If not specified, 'dedicated' is used.

Supported values: 'dedicated' and 'lowpriority'.

This takes effect only when the vm_size parameter is specified.

entry_script
str

The relative path to the file used to start training.

script_params
dict

A dictionary of command-line arguments to pass to the training script specified in entry_script.
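For example, a hypothetical training script could receive its arguments through a dictionary like the following (the argument names here are illustrative, not part of the SDK):

```python
# Hypothetical command-line arguments for the script named in entry_script.
# Keys are flag names; values are serialized onto the command line.
script_params = {
    '--num_epochs': 20,          # e.g. parsed by argparse inside the script
    '--output_dir': './outputs'  # Azure ML uploads ./outputs automatically
}

# Flatten into the argv-style list the training script would ultimately see.
argv = [str(part) for pair in script_params.items() for part in pair]
```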

node_count
int

The number of nodes in the compute target used for training. If greater than 1, an MPI distributed job will be run.

process_count_per_node
int

The number of processes (or "workers") to run on each node. If greater than 1, an MPI distributed job will be run. Only the AmlCompute target is supported for distributed jobs.

distributed_backend
str

The communication backend for distributed training.

DEPRECATED. Use the distributed_training parameter.

Supported values: 'mpi', which represents MPI/Horovod.

This parameter is required when node_count or process_count_per_node > 1.

When node_count == 1 and process_count_per_node == 1, no backend will be used unless the backend is explicitly set. Only the AmlCompute target is supported for distributed training.

distributed_training
Mpi

Parameters for running a distributed training job.

For running a distributed job with MPI backend, use Mpi object to specify process_count_per_node.
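As a sketch of the MPI configuration (this assumes the Mpi class is importable from azureml.train.dnn and that compute_target refers to an existing AmlCompute cluster):

```python
from azureml.train.dnn import Mpi
from azureml.train.estimator import Estimator

# Run on 2 nodes with 2 MPI worker processes per node (4 workers in total).
estimator = Estimator(source_directory='.',
                      compute_target=compute_target,  # assumed AmlCompute target
                      entry_script='train.py',
                      node_count=2,
                      distributed_training=Mpi(process_count_per_node=2))
```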

use_gpu
bool

Indicates whether the environment to run the experiment should support GPUs. If true, a GPU-based default Docker image will be used in the environment. If false, a CPU-based image will be used. Default Docker images (CPU or GPU) will be used only if the custom_docker_image parameter is not set. This setting is used only in Docker enabled compute targets.

use_docker
bool

Specifies whether the environment to run the experiment should be Docker-based.

custom_docker_image
str

The name of the Docker image from which the image to use for training will be built. If not set, a default CPU-based image will be used as the base image. Only specify images available in public Docker repositories (Docker Hub). To use an image from a private Docker repository, use the constructor's environment_definition parameter instead.

image_registry_details
ContainerRegistry

The details of the Docker image registry.

user_managed
bool

Specifies whether Azure ML reuses an existing Python environment. If false, a Python environment is created based on the conda dependencies specification.

conda_packages
list

A list of strings representing conda packages to be added to the Python environment for the experiment.

pip_packages
list

A list of strings representing pip packages to be added to the Python environment for the experiment.

conda_dependencies_file_path
str

The relative path to the conda dependencies yaml file.

DEPRECATED. Use the conda_dependencies_file parameter.

Specify either conda_dependencies_file_path or conda_dependencies_file. If both are specified, conda_dependencies_file is used.

pip_requirements_file_path
str

The relative path to the pip requirements text file.

DEPRECATED. Use the pip_requirements_file parameter.

This parameter can be specified in combination with the pip_packages parameter. Specify either pip_requirements_file_path or pip_requirements_file. If both are specified, pip_requirements_file is used.

conda_dependencies_file
str

The relative path to the conda dependencies yaml file.

pip_requirements_file
str

The relative path to the pip requirements text file. This parameter can be specified in combination with the pip_packages parameter.

environment_variables
dict

A dictionary of environment variable names and values. These environment variables are set on the process where the user script is being executed.

environment_definition
EnvironmentDefinition

The environment definition for the experiment. It includes PythonSection, DockerSection, and environment variables. Any environment option not directly exposed through other parameters to the Estimator construction can be set using this parameter. If this parameter is specified, it will take precedence over other environment-related parameters like use_gpu, custom_docker_image, conda_packages, or pip_packages. Errors will be reported on invalid combinations.

inputs
list

A list of DataReference or DatasetConsumptionConfig objects to use as input.

source_directory_data_store
Datastore

The backing data store for the project share.

shm_size
str

The size of the Docker container's shared memory block. If not set, the default azureml.core.environment._DEFAULT_SHM_SIZE is used. For more information, see Docker run reference.

resume_from
DataPath

The data path containing the checkpoint or model files from which to resume the experiment.

max_run_duration_seconds
int

The maximum allowed time for the run. Azure ML will attempt to automatically cancel the run if it takes longer than this value.

Remarks

The example below shows how to create an estimator for training using the Microsoft Cognitive Toolkit (CNTK). CNTK doesn't have a corresponding custom estimator defined in Azure ML SDK but can still be used for training with the Estimator class.


   from azureml.train.estimator import Estimator

   script_params = {
       '--num_epochs': 20,
       '--data_dir': ds_data.as_mount(),
       '--output_dir': './outputs'
   }

   estimator = Estimator(source_directory=project_folder,
                         compute_target=compute_target,
                         entry_script='cntk_distr_mnist.py',
                         script_params=script_params,
                         node_count=2,
                         process_count_per_node=1,
                         distributed_backend='mpi',
                         pip_packages=['cntk-gpu==2.6'],
                         custom_docker_image='microsoft/mmlspark:gpu-0.12',
                         use_gpu=True)

The full sample is available at https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training-with-deep-learning/distributed-cntk-with-custom-docker/distributed-cntk-with-custom-docker.ipynb
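Once constructed, an estimator is submitted to an Experiment to start the training run. A sketch, assuming a workspace configuration file (config.json) is present and `estimator` is the object built in the example above:

```python
from azureml.core import Experiment, Workspace

# Load the workspace from the local config.json (assumed to exist).
ws = Workspace.from_config()

# Create (or reuse) an experiment and submit the estimator as a run.
experiment = Experiment(workspace=ws, name='cntk-distr-mnist')
run = experiment.submit(estimator)

# Stream logs to the console until the run finishes.
run.wait_for_completion(show_output=True)
```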

For more information about training with Estimator, see the tutorial Train models with Azure Machine Learning using estimator.

For information about Docker containers used in Azure ML training, see https://github.com/Azure/AzureML-Containers.