Run class

Definition

Defines the base class for all Azure Machine Learning experiment runs.

A run represents a single trial of an experiment. Runs are used to monitor the asynchronous execution of a trial, log metrics and store output of the trial, and to analyze results and access artifacts generated by the trial.

Run objects are created when you submit a script to train a model in many different scenarios in Azure Machine Learning, including HyperDrive runs, Pipeline runs, and AutoML runs. A Run object is also created when you call submit(config, tags=None, **kwargs) or start_logging(*args, **kwargs) on the Experiment class.

To get started with experiments and runs, see

Run(experiment, run_id, outputs=None, **kwargs)
Inheritance
azureml._logging.chained_identity.ChainedIdentity
azureml._run_impl.run_base._RunBase
Run

Parameters

experiment
Experiment

The containing experiment.

run_id
str

The ID for the run.

outputs
str

The outputs to be tracked.

_run_dto
azureml._restclient.models.run_dto.RunDto

Internal use only.

kwargs
dict

A dictionary of additional configuration parameters.

Remarks

A run represents a single trial of an experiment. A Run object is used to monitor the asynchronous execution of a trial, log metrics and store output of the trial, and to analyze results and access artifacts generated by the trial.

Run is used inside of your experimentation code to log metrics and artifacts to the Run History service.

Run is used outside of your experiments to monitor progress and to query and analyze the metrics and results that were generated.

The functionality of Run includes:

  • Storing and retrieving metrics and data

  • Uploading and downloading files

  • Using tags as well as the child hierarchy for easy lookup of past runs

  • Registering stored model files as a model that can be operationalized

  • Storing, modifying, and retrieving properties of a run

  • Loading the current run from a remote environment with the get_context(allow_offline=True, used_for_context_manager=False, **kwargs) method

  • Efficiently snapshotting a file or directory for reproducibility

This class works with the Experiment in these scenarios:

To submit a run, create a configuration object that describes how the experiment is run.

The following metrics can be added to a run while training an experiment.

  • Scalar

    • Log a numerical or string value to the run with the given name using log(name, value, description=''). Logging a metric to a run causes that metric to be stored in the run record in the experiment. You can log the same metric multiple times within a run, the result being considered a vector of that metric.

    • Example: run.log("accuracy", 0.95)

  • List

    • Log a list of metric values to the run with the given name using log_list(name, value, description='').

  • Row

    • Using log_row(name, description=None, **kwargs) creates a metric with multiple columns as described in kwargs. Each named parameter generates a column with the value specified. log_row can be called once to log an arbitrary tuple, or multiple times in a loop to generate a complete table.

    • Example: run.log_row("Y over X", x=1, y=0.4)

  • Table

    • Log a table metric to the run with the given name using log_table(name, value, description='').

  • Image

    • Log an image to the run record using log_image(name, path=None, plot=None, description=''). Use this method to log an image file or a matplotlib plot to the run.
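The scalar-versus-vector and row-versus-table semantics above can be sketched with a toy in-memory logger. This is a hypothetical stand-in for illustration only, not the SDK's implementation:

```python
class ToyRunLog:
    """Hypothetical in-memory stand-in for a Run's metric store,
    illustrating the semantics described above (not the real SDK class)."""

    def __init__(self):
        self.metrics = {}

    def log(self, name, value):
        # Logging the same scalar name repeatedly yields a vector of values.
        existing = self.metrics.get(name)
        if existing is None:
            self.metrics[name] = value           # first call: a scalar
        elif isinstance(existing, list):
            existing.append(value)               # later calls: append to vector
        else:
            self.metrics[name] = [existing, value]

    def log_row(self, name, **kwargs):
        # Each call appends one row; repeated calls build up a table.
        self.metrics.setdefault(name, []).append(dict(kwargs))


run = ToyRunLog()
run.log("accuracy", 0.90)
run.log("accuracy", 0.95)               # "accuracy" becomes a vector
run.log_row("Y over X", x=1, y=0.4)
run.log_row("Y over X", x=2, y=0.5)     # "Y over X" becomes a table

print(run.metrics["accuracy"])          # [0.9, 0.95]
```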

Methods

add_properties(properties)

Add immutable properties to the run.

Tags and Properties (both dict[str, str]) differ in their mutability. Properties are immutable, so they create a permanent record for auditing purposes. Tags are mutable.
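The mutability contract above can be modeled with a small sketch. ToyRecord is a hypothetical class, not part of the SDK; it only illustrates that tags may be overwritten while properties are write-once:

```python
class ToyRecord:
    """Hypothetical sketch of the tags-vs-properties contract:
    tags are freely mutable, properties are write-once."""

    def __init__(self):
        self.tags = {}
        self.properties = {}

    def set_tags(self, tags):
        self.tags.update(tags)                 # tags may be overwritten later

    def add_properties(self, properties):
        for key, value in properties.items():
            if key in self.properties and self.properties[key] != value:
                # Properties form a permanent record for auditing.
                raise ValueError(f"property {key!r} is immutable")
            self.properties[key] = value


record = ToyRecord()
record.set_tags({"quality": "draft"})
record.set_tags({"quality": "final"})          # allowed: tags are mutable
record.add_properties({"author": "alice"})
try:
    record.add_properties({"author": "bob"})   # rejected: properties are immutable
except ValueError as err:
    print(err)
```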

add_type_provider(runtype, run_factory)

Extensibility hook for custom Run types stored in Run History.

cancel()

Mark the run as canceled.

If there is an associated job with a set cancel_uri field, terminate that job as well.

child_run(name=None, run_id=None, outputs=None)

Create a child run.

clean()

Remove the files corresponding to the current run on the target specified in the run configuration.

complete(_set_status=True)

Wait for task queue to be processed.

The run is then marked as completed. This is typically used in interactive notebook scenarios.

create_children(count=None, tag_key=None, tag_values=None)

Create one or many child runs.

download_file(name, output_file_path=None)

Download an associated file from storage.

download_files(prefix=None, output_directory=None, output_paths=None, batch_size=100, append_prefix=True)

Download files from a given storage prefix (folder name) or the entire container if prefix is unspecified.

fail(error_details=None, error_code=None, _set_status=True)

Mark the run as failed.

Optionally set the Error property of the run with a message or exception passed to error_details.

flush(timeout_seconds=300)

Wait for task queue to be processed.

get_all_logs(destination=None)

Download all logs for the run to a directory.

get_children(recursive=False, tags=None, properties=None, type=None, status=None, _rehydrate_runs=True)

Get all children for the current run selected by specified filters.

get_context(allow_offline=True, used_for_context_manager=False, **kwargs)

Return current service context.

Use this method to retrieve the current service context for logging metrics and uploading files. If allow_offline is True (the default), actions against the Run object will be printed to standard out.

get_details()

Get the definition, status information, current log files, and other details of the run.

get_details_with_logs()

Return run status including log file content.

get_environment()

Get the environment definition that was used by this run.

get_file_names()

List the files that are stored in association with the run.

get_metrics(name=None, recursive=False, run_type=None, populate=False)

Retrieve the metrics logged to the run.

If recursive is True (False by default), then fetch metrics for runs in the given run's subtree.

get_properties()

Fetch the latest properties of the run from the service.

get_secret(name)

Get the secret value from the context of a run.

Get the secret value for the name provided. The secret name references a value stored in Azure Key Vault associated with your workspace. For an example of working with secrets, see Use secrets in training runs.

get_secrets(secrets)

Get the secret values for a given list of secret names.

Get a dictionary of found and not found secrets for the list of names provided. Each secret name references a value stored in Azure Key Vault associated with your workspace. For an example of working with secrets, see Use secrets in training runs.

get_snapshot_id()

Get the latest snapshot ID.

get_status()

Fetch the latest status of the run.

Common values returned include "Running", "Completed", and "Failed".

get_submitted_run(**kwargs)

DEPRECATED. Use get_context(allow_offline=True, used_for_context_manager=False, **kwargs).

Get the submitted run for this experiment.

get_tags()

Fetch the latest set of mutable tags on the run from the service.

list(experiment, type=None, tags=None, properties=None, status=None, include_children=False, _rehydrate_runs=True)

Get a list of runs in an experiment specified by optional filters.

list_by_compute(compute, type=None, tags=None, properties=None, status=None)

Get a list of runs in a compute specified by optional filters.

log(name, value, description='')

Log a metric value to the run with the given name.

log_accuracy_table(name, value, description='')

Log an accuracy table to the artifact store.

The accuracy table metric is a multi-use, non-scalar metric that can be used to produce multiple types of line charts that vary continuously over the space of predicted probabilities. Examples of these charts are ROC, precision-recall, and lift curves.

The calculation of the accuracy table is similar to the calculation of an ROC curve. An ROC curve stores true positive rates and false positive rates at many different probability thresholds. The accuracy table stores the raw number of true positives, false positives, true negatives, and false negatives at many probability thresholds.

There are two methods used for selecting thresholds: "probability" and "percentile." They differ in how they sample from the space of predicted probabilities.

Probability thresholds are uniformly spaced thresholds between 0 and 1. If NUM_POINTS is 5 the probability thresholds would be [0.0, 0.25, 0.5, 0.75, 1.0].

Percentile thresholds are spaced according to the distribution of predicted probabilities. Each threshold corresponds to the percentile of the data at a probability threshold. For example, if NUM_POINTS is 5, then the first threshold would be at the 0th percentile, the second at the 25th percentile, the third at the 50th, and so on.

The probability tables and percentile tables are both 3D lists where the first dimension represents the class label, the second dimension represents the sample at one threshold (scales with NUM_POINTS), and the third dimension always has 4 values: TP, FP, TN, FN, and always in that order.

The confusion values (TP, FP, TN, FN) are computed with the one vs. rest strategy. See the following link for more details: https://en.wikipedia.org/wiki/Multiclass_classification
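The one-vs-rest counting can be sketched in plain Python. This is an illustrative reconstruction, not the SDK's code, and it assumes a sample is predicted positive when its predicted probability for the class is at or above the threshold:

```python
def confusion_at_thresholds(y_true, y_prob, positive_class, thresholds):
    """Compute one [TP, FP, TN, FN] row per threshold for one class,
    using the one-vs-rest strategy described above."""
    rows = []
    for t in thresholds:
        tp = fp = tn = fn = 0
        for label, prob in zip(y_true, y_prob):
            predicted_positive = prob >= t          # assumed convention
            actually_positive = label == positive_class
            if predicted_positive and actually_positive:
                tp += 1
            elif predicted_positive:
                fp += 1
            elif actually_positive:
                fn += 1
            else:
                tn += 1
        rows.append([tp, fp, tn, fn])               # always TP, FP, TN, FN order
    return rows


# Tiny example: 6 samples, predicted probabilities for class "a".
y_true = ["a", "a", "b", "a", "b", "b"]
y_prob = [0.9, 0.7, 0.6, 0.4, 0.2, 0.1]
thresholds = [0.0, 0.25, 0.5, 0.75, 1.0]   # NUM_POINTS = 5, "probability" method
table = confusion_at_thresholds(y_true, y_prob, "a", thresholds)
print(table[0])                             # [3, 3, 0, 0]: everything predicted positive at t=0
```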

N = number of samples in the validation dataset (200 in the example)
M = number of thresholds = number of samples taken from the probability space (5 in the example)
C = number of classes in the full dataset (3 in the example)

Some invariants of the accuracy table:

  • TP + FP + TN + FN = N for all thresholds for all classes
  • TP + FN is the same at all thresholds for any class
  • TN + FP is the same at all thresholds for any class
  • Probability tables and percentile tables have shape [C, M, 4]

Note: M can be any value and controls the resolution of the charts. It is independent of the dataset, is defined when calculating metrics, and trades off storage space, computation time, and resolution.

Class labels should be strings, confusion values should be integers, and thresholds should be floats.
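These invariants can be checked directly against one class's probability table from the example JSON shown later in this document for log_accuracy_table (N = 200, NUM_POINTS = 5):

```python
# One class's probability table; each row is [TP, FP, TN, FN] at one threshold.
class_table = [
    [82, 118, 0, 0],
    [75, 31, 87, 7],
    [66, 9, 109, 16],
    [46, 2, 116, 36],
    [0, 0, 118, 82],
]

N = 200
for tp, fp, tn, fn in class_table:
    assert tp + fp + tn + fn == N        # every row partitions all N samples
    assert tp + fn == class_table[0][0]  # positives constant at all thresholds (82)
    assert tn + fp == class_table[0][1]  # negatives constant at all thresholds (118)

print("all invariants hold")
```

The first row makes the constants visible: at threshold 0 everything is predicted positive, so TP equals the total positives and FP equals the total negatives.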

log_confusion_matrix(name, value, description='')

Log a confusion matrix to the artifact store.

This logs a wrapper around the sklearn confusion matrix. The metric data contains the class labels and a 2D list for the matrix itself. See the following link for more details on how the metric is computed: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html

log_image(name, path=None, plot=None, description='')

Log an image metric to the run record.

log_list(name, value, description='')

Log a list of metric values to the run with the given name.

log_predictions(name, value, description='')

Log predictions to the artifact store.

This logs a metric score that can be used to compare the distributions of true target values to the distribution of predicted values for a regression task.

The predictions are binned and standard deviations are calculated for error bars on a line chart.

log_residuals(name, value, description='')

Log residuals to the artifact store.

This logs the data needed to display a histogram of residuals for a regression task. The residuals are predicted - actual.

There should be one more edge than the number of counts. Please see the numpy histogram documentation for examples of using counts and edges to represent a histogram. https://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html
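The counts-and-edges relationship can be sketched without the SDK. This minimal histogram follows the numpy.histogram convention referenced above (a hypothetical helper, not the SDK's implementation): len(edges) is len(counts) + 1, bin i covers edges[i] <= x < edges[i+1], and the last bin includes its right edge:

```python
def histogram(values, edges):
    """Minimal counts-and-edges histogram, numpy.histogram-style."""
    counts = [0] * (len(edges) - 1)
    for x in values:
        for i in range(len(counts)):
            last = i == len(counts) - 1
            if edges[i] <= x < edges[i + 1] or (last and x == edges[-1]):
                counts[i] += 1
                break
    return counts


# Residuals are predicted - actual; bin them for the histogram chart.
residuals = [-0.9, -0.2, 0.1, 0.1, 0.4, 1.0]
edges = [-1.0, -0.5, 0.0, 0.5, 1.0]
counts = histogram(residuals, edges)
print(counts)                           # [1, 1, 3, 1]
assert len(edges) == len(counts) + 1    # one more edge than count entries
```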

log_row(name, description=None, **kwargs)

Log a row metric to the run with the given name.

log_table(name, value, description='')

Log a table metric to the run with the given name.

register_model(model_name, model_path=None, tags=None, properties=None, model_framework=None, model_framework_version=None, description=None, datasets=None, sample_input_dataset=None, sample_output_dataset=None, resource_configuration=None, **kwargs)

Register a model for operationalization.

remove_tags(tags)

Delete the list of mutable tags on this run.

restore_snapshot(snapshot_id=None, path=None)

Restore a snapshot as a ZIP file. Returns the path to the ZIP.

set_tags(tags)

Add or modify a set of tags on the run. Tags not passed in the dictionary are left untouched.

You can also add simple string tags. When these tags appear in the tag dictionary as keys, they have a value of None. For more information, see Tag and find runs.
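The simple-string-tag convention above can be sketched with a small normalizer. normalize_tags is a hypothetical helper for illustration; it is not part of the SDK:

```python
def normalize_tags(tags):
    """Hypothetical helper showing the convention described above:
    a plain string tag becomes a key with value None, while dict
    entries keep their values."""
    if isinstance(tags, str):
        return {tags: None}
    return dict(tags)


print(normalize_tags("favorite"))            # {'favorite': None}
print(normalize_tags({"quality": "final"}))  # {'quality': 'final'}
```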

start()

Mark the run as started.

This is typically used in advanced scenarios when the run has been created by another actor.

submit_child(config, tags=None, **kwargs)

Submit an experiment and return the active child run.

tag(key, value=None)

Tag the run with a string key and optional string value.

take_snapshot(file_or_folder_path)

Save a snapshot of the input file or folder.

upload_file(name, path_or_stream)

Upload a file to the run record.

upload_files(names, paths, return_artifacts=False, timeout_seconds=None)

Upload files to the run record.

upload_folder(name, path)

Upload the specified folder to the given prefix name.

wait_for_completion(show_output=False, wait_post_processing=False, raise_on_error=True)

Wait for the completion of this run. Returns the status object after the wait.

add_properties(properties)

Add immutable properties to the run.

Tags and Properties (both dict[str, str]) differ in their mutability. Properties are immutable, so they create a permanent record for auditing purposes. Tags are mutable.

add_properties(properties)

Parameters

properties
dict

The hidden properties stored in the run object.

add_type_provider(runtype, run_factory)

Extensibility hook for custom Run types stored in Run History.

add_type_provider(runtype, run_factory)

Parameters

runtype
str

The value of Run.type for which the factory will be invoked. Examples include 'hyperdrive' or 'azureml.scriptrun', but can be extended with custom types.

run_factory
function

A function with signature (Experiment, RunDto) -> Run to be invoked when listing runs.

cancel()

Mark the run as canceled.

If there is an associated job with a set cancel_uri field, terminate that job as well.

cancel()

child_run(name=None, run_id=None, outputs=None)

Create a child run.

child_run(name=None, run_id=None, outputs=None)

Parameters

name
str

An optional name for the child run, typically specified for a "part".

default value: None
run_id
str

An optional run ID for the child, otherwise it is auto-generated. Typically this parameter is not set.

default value: None
outputs
str

Optional outputs directory to track for the child.

default value: None

Returns

The child run.

Return type

Run

Remarks

This is used to isolate part of a run into a subsection. This can be done for identifiable "parts" of a run that are interesting to separate, or to capture independent metrics across an iteration of a subprocess.

If an output directory is set for the child run, the contents of that directory will be uploaded to the child run record when the child is completed.

clean()

Remove the files corresponding to the current run on the target specified in the run configuration.

clean()

Returns

A list of files deleted.

Return type

complete(_set_status=True)

Wait for task queue to be processed.

The run is then marked as completed. This is typically used in interactive notebook scenarios.

complete(_set_status=True)

Parameters

_set_status
bool

Indicates whether to send the status event for tracking.

default value: True

create_children(count=None, tag_key=None, tag_values=None)

Create one or many child runs.

create_children(count=None, tag_key=None, tag_values=None)

Parameters

count
int

An optional number of children to create.

default value: None
tag_key
str

An optional key to populate the Tags entry in all created children.

default value: None
tag_values

An optional list of values that will map onto Tags[tag_key] for the list of runs created.

default value: None

Returns

The list of child runs.

Return type

Remarks

Either parameter count OR parameters tag_key AND tag_values must be specified.
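The argument rule above can be made concrete with a small validator. This is a hypothetical sketch of the stated contract (either count, or tag_key together with tag_values), not the SDK's actual validation code:

```python
def validate_create_children_args(count=None, tag_key=None, tag_values=None):
    """Hypothetical sketch: either count, OR both tag_key and tag_values,
    must be specified; returns the number of children to create."""
    by_count = count is not None
    by_tags = tag_key is not None and tag_values is not None
    if by_count == by_tags:   # neither form, or both forms, is an error
        raise ValueError(
            "specify either count, or tag_key and tag_values together")
    return count if by_count else len(tag_values)


print(validate_create_children_args(count=3))                            # 3
print(validate_create_children_args(tag_key="fold", tag_values=["a", "b"]))  # 2
```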

download_file(name, output_file_path=None)

Download an associated file from storage.

download_file(name, output_file_path=None)

Parameters

name
str

The name of the artifact to be downloaded.

output_file_path
str

The local path where to store the artifact.

download_files(prefix=None, output_directory=None, output_paths=None, batch_size=100, append_prefix=True)

Download files from a given storage prefix (folder name) or the entire container if prefix is unspecified.

download_files(prefix=None, output_directory=None, output_paths=None, batch_size=100, append_prefix=True)

Parameters

prefix
str

The filepath prefix within the container from which to download all artifacts.

output_directory
str

An optional directory that all artifact paths use as a prefix.

output_paths
[str]

Optional file paths in which to store the downloaded artifacts. The paths should be unique and match the number of artifacts downloaded.

batch_size
int

The number of files to download per batch. The default is 100 files.

append_prefix
bool

An optional flag indicating whether to append the specified prefix to the final output file path. If False, the prefix is removed from the output file path.

fail(error_details=None, error_code=None, _set_status=True)

Mark the run as failed.

Optionally set the Error property of the run with a message or exception passed to error_details.

fail(error_details=None, error_code=None, _set_status=True)

Parameters

error_details
str or BaseException

Optional details of the error.

default value: None
error_code
str

Optional error code of the error for the error classification.

default value: None
_set_status
bool

Indicates whether to send the status event for tracking.

default value: True

flush(timeout_seconds=300)

Wait for task queue to be processed.

flush(timeout_seconds=300)

Parameters

timeout_seconds
int

How long to wait (in seconds) for task queue to be processed.

default value: 300

get_all_logs(destination=None)

Download all logs for the run to a directory.

get_all_logs(destination=None)

Parameters

destination
str

The destination path to store logs. If unspecified, a directory named as the run ID is created in the project directory.

default value: None

Returns

A list of names of logs downloaded.

Return type

get_children(recursive=False, tags=None, properties=None, type=None, status=None, _rehydrate_runs=True)

Get all children for the current run selected by specified filters.

get_children(recursive=False, tags=None, properties=None, type=None, status=None, _rehydrate_runs=True)

Parameters

recursive
bool

Indicates whether to recurse through all descendants.

default value: False
tags
str or dict

If specified, returns runs matching specified "tag" or {"tag": "value"}.

default value: None
properties
str or dict

If specified, returns runs matching specified "property" or {"property": "value"}.

default value: None
type
str

If specified, returns runs matching this type.

default value: None
status
str

If specified, returns runs with the specified status.

default value: None
_rehydrate_runs
bool

Indicates whether to instantiate a run of the original type or the base Run.

default value: True

Returns

A list of azureml.core.run.Run objects.

Return type

get_context(allow_offline=True, used_for_context_manager=False, **kwargs)

Return current service context.

Use this method to retrieve the current service context for logging metrics and uploading files. If allow_offline is True (the default), actions against the Run object will be printed to standard out.

get_context(allow_offline=True, used_for_context_manager=False, **kwargs)

Parameters

cls

Indicates class method.

allow_offline
bool

Allow the service context to fall back to offline mode so that the training script can be tested locally without submitting a job with the SDK. True by default.

default value: True
kwargs
dict

A dictionary of additional parameters.

Returns

The submitted run.

Return type

Run

Remarks

This function is commonly used to retrieve the authenticated Run object inside of a script to be submitted for execution via experiment.submit(). This run object is both an authenticated context to communicate with Azure Machine Learning services and a conceptual container within which metrics, files (artifacts), and models are contained.


   run = Run.get_context() # allow_offline=True by default, so can be run locally as well
   ...
   run.log("Accuracy", 0.98)
   run.log_row("Performance", epoch=e, error=err)

get_details()

Get the definition, status information, current log files, and other details of the run.

get_details()

Returns

Return the details for the run.

Return type

Remarks

The returned dictionary contains the following key-value pairs:

  • runId: ID of this run.

  • target

  • status: The run's current status. Same value as that returned from get_status().

  • startTimeUtc: UTC time of when this run was started, in ISO8601.

  • endTimeUtc: UTC time of when this run was finished (either Completed or Failed), in ISO8601.

    This key does not exist if the run is still in progress.

  • properties: Immutable key-value pairs associated with the run. Default properties include the run's snapshot ID and information about the git repository from which the run was created (if any). Additional properties can be added to a run using add_properties(properties).

  • datasets: Datasets associated with the run.

  • logFiles


   run = experiment.start_logging()

   details = run.get_details()
   # details = {
   #     'runId': '5c24aa28-6e4a-4572-96a0-fb522d26fe2d',
   #     'target': 'sdk',
   #     'status': 'Running',
   #     'startTimeUtc': '2019-01-01T13:08:01.713777Z',
   #     'properties': {
   #         'azureml.git.repository_uri': 'https://example.com/my/git/repo',
   #         'azureml.git.branch': 'master',
   #         'azureml.git.commit': '7dc972657c2168927a02c3bc2b161e0f370365d7',
   #         'azureml.git.dirty': 'True',
   #         'mlflow.source.git.repoURL': 'https://example.com/my/git/repo',
   #         'mlflow.source.git.branch': 'master',
   #         'mlflow.source.git.commit': '7dc972657c2168927a02c3bc2b161e0f370365d7',
   #         'ContentSnapshotId': 'b4689489-ce2f-4db5-b6d7-6ad11e77079c'
   #     },
   #     'datasets': {
   #         'inputs': [{
   #             'dataset': {'id': 'cdebf245-701d-4a68-8055-41f9cf44f298'},
   #             'consumptionDetails': {
   #                 'type': 'RunInput',
   #                 'inputName': 'training-data',
   #                 'mechanism': 'Mount',
   #                 'pathOnCompute': '/mnt/datasets/train'
   #             }
   #         }]
   #     },
   #     'logFiles': {}
   # }

get_details_with_logs()

Return run status including log file content.

get_details_with_logs()

Returns

Returns the status for the run with log file contents.

Return type

get_environment()

Get the environment definition that was used by this run.

get_environment()

Returns

Return the environment object.

Return type

get_file_names()

List the files that are stored in association with the run.

get_file_names()

Returns

The list of paths for existing artifacts.

Return type

get_metrics(name=None, recursive=False, run_type=None, populate=False)

Retrieve the metrics logged to the run.

If recursive is True (False by default), then fetch metrics for runs in the given run's subtree.

get_metrics(name=None, recursive=False, run_type=None, populate=False)

Parameters

name
str

The name of the metric.

default value: None
recursive
bool

Indicates whether to recurse through all descendants.

default value: False
run_type
str
default value: None
populate
bool

Indicates whether to fetch the contents of external data linked to the metric.

default value: False

Returns

A dictionary containing the user's metrics.

Return type

Remarks


   run = experiment.start_logging() # run id: 123
   run.log("A", 1)
   with run.child_run() as child: # run id: 456
       child.log("A", 2)

   metrics = run.get_metrics()
   # metrics = { 'A': 1 }

   metrics = run.get_metrics(recursive=True)
   # metrics = { '123': { 'A': 1 }, '456': { 'A': 2 } }  # note: keys are run IDs

get_properties()

Fetch the latest properties of the run from the service.

get_properties()

Returns

The properties of the run.

Return type

Remarks

Properties include immutable system-generated information such as duration, date of execution, user, etc. For more information, see Tag and find runs.

get_secret(name)

Get the secret value from the context of a run.

Get the secret value for the name provided. The secret name references a value stored in Azure Key Vault associated with your workspace. For an example of working with secrets, see Use secrets in training runs.

get_secret(name)

Parameters

name
str

The secret name for which to return a secret.

Returns

The secret value.

Return type

str

get_secrets(secrets)

Get the secret values for a given list of secret names.

Get a dictionary of found and not found secrets for the list of names provided. Each secret name references a value stored in Azure Key Vault associated with your workspace. For an example of working with secrets, see Use secrets in training runs.

get_secrets(secrets)

Parameters

secrets
list[str]

A list of secret names for which to return secret values.

Returns

Returns a dictionary of found and not found secrets.

Return type

dict[str, str]

get_snapshot_id()

Get the latest snapshot ID.

get_snapshot_id()

Returns

The most recent snapshot ID.

Return type

str

get_status()

Fetch the latest status of the run.

Common values returned include "Running", "Completed", and "Failed".

get_status()

Returns

The latest status.

Return type

str

Remarks

  • NotStarted - A temporary client-side state that Run objects are in before cloud submission.

  • Starting - The Run has started being processed in the cloud. The caller has a run ID at this point.

  • Provisioning - Returned when on-demand compute is being created for a given job submission.

  • Preparing - The run environment is being prepared:

    • docker image build

    • conda environment setup

  • Queued - The job is queued in the compute target. For example, in BatchAI, the job is in a queued state while waiting for all the requested nodes to be ready.

  • Running - The job started to run in the compute target.

  • Finalizing - User code has completed and the run is in post-processing stages.

  • CancelRequested - Cancellation has been requested for the job.

  • Completed - The run completed successfully. This includes both the user code and run post-processing stages.

  • Failed - The run failed. Usually the Error property on a run will provide details as to why.

  • Canceled - Follows a cancellation request and indicates that the run is now successfully canceled.

  • NotResponding - For runs that have Heartbeats enabled, no heartbeat has been recently sent.


   run = experiment.submit(config)
   while run.get_status() not in ['Completed', 'Failed']: # For example purposes only, not exhaustive
       print('Run {} not in terminal state'.format(run.id))
       time.sleep(10)

get_submitted_run(**kwargs)

DEPRECATED. Use get_context(allow_offline=True, used_for_context_manager=False, **kwargs).

Get the submitted run for this experiment.

get_submitted_run(**kwargs)

Parameters

cls

Returns

The submitted run.

Return type

Run

get_tags()

Fetch the latest set of mutable tags on the run from the service.

get_tags()

Returns

The tags stored on the run object.

Return type

list(experiment, type=None, tags=None, properties=None, status=None, include_children=False, _rehydrate_runs=True)

Get a list of runs in an experiment specified by optional filters.

list(experiment, type=None, tags=None, properties=None, status=None, include_children=False, _rehydrate_runs=True)

Parameters

experiment
Experiment

The containing experiment.

type
str

If specified, returns runs matching specified type.

default value: None
tags
str or dict

If specified, returns runs matching specified "tag" or {"tag": "value"}.

default value: None
properties
str or dict

If specified, returns runs matching specified "property" or {"property": "value"}.

default value: None
status
str

If specified, returns runs with the specified status.

default value: None
include_children
bool

If set to True, fetches all runs, not only top-level ones.

default value: False
_rehydrate_runs
bool

If set to True (the default), uses the registered provider to reinstantiate an object of the original type instead of the base Run.

default value: True

Returns

A list of azureml.core.run.Run objects.

Return type

Remarks


   favorite_completed_runs = Run.list(experiment, status='Completed', tags='favorite')

   all_distinct_runs = Run.list(experiment)
   and_their_children = Run.list(experiment, include_children=True)

   only_script_runs = Run.list(experiment, type=ScriptRun.RUN_TYPE)

list_by_compute(compute, type=None, tags=None, properties=None, status=None)

Get a list of runs in a compute specified by optional filters.

list_by_compute(compute, type=None, tags=None, properties=None, status=None)

Parameters

compute
ComputeTarget

The containing compute.

type
str

If specified, returns runs matching specified type.

default value: None
tags
str or dict

If specified, returns runs matching specified "tag" or {"tag": "value"}.

default value: None
properties
str or dict

If specified, returns runs matching specified "property" or {"property": "value"}.

default value: None
status
str

If specified, returns runs with the specified status. The only allowed values are "Running" and "Queued".

default value: None

Returns

A generator of azureml._restclient.models.RunDto objects.

Return type

builtin.generator

log(name, value, description='')

Log a metric value to the run with the given name.

log(name, value, description='')

Parameters

name
str

The name of the metric.

value

The value to be posted to the service.

description
str

An optional metric description.

Remarks

Logging a metric to a run causes that metric to be stored in the run record in the experiment. You can log the same metric multiple times within a run, the result being considered a vector of that metric.

log_accuracy_table(name, value, description='')

Log an accuracy table to the artifact store.

The accuracy table metric is a multi-use, non-scalar metric that can be used to produce multiple types of line charts that vary continuously over the space of predicted probabilities. Examples of these charts are ROC, precision-recall, and lift curves.

The calculation of the accuracy table is similar to the calculation of an ROC curve. An ROC curve stores true positive rates and false positive rates at many different probability thresholds. The accuracy table stores the raw number of true positives, false positives, true negatives, and false negatives at many probability thresholds.

There are two methods used for selecting thresholds: "probability" and "percentile." They differ in how they sample from the space of predicted probabilities.

Probability thresholds are uniformly spaced thresholds between 0 and 1. If NUM_POINTS is 5 the probability thresholds would be [0.0, 0.25, 0.5, 0.75, 1.0].

Percentile thresholds are spaced according to the distribution of predicted probabilities. Each threshold corresponds to the percentile of the data at a probability threshold. For example, if NUM_POINTS is 5, then the first threshold would be at the 0th percentile, the second at the 25th percentile, the third at the 50th, and so on.

The probability tables and percentile tables are both 3D lists where the first dimension represents the class label, the second dimension represents the sample at one threshold (scales with NUM_POINTS), and the third dimension always has 4 values: TP, FP, TN, FN, and always in that order.

The confusion values (TP, FP, TN, FN) are computed with the one vs. rest strategy. See the following link for more details: https://en.wikipedia.org/wiki/Multiclass_classification

N = number of samples in the validation dataset (200 in the example)
M = number of thresholds = number of samples taken from the probability space (5 in the example)
C = number of classes in the full dataset (3 in the example)

Some invariants of the accuracy table:

  • TP + FP + TN + FN = N for all thresholds for all classes
  • TP + FN is the same at all thresholds for any class
  • TN + FP is the same at all thresholds for any class
  • Probability tables and percentile tables have shape [C, M, 4]

Note: M can be any value and controls the resolution of the charts. It is independent of the dataset, is defined when calculating metrics, and trades off storage space, computation time, and resolution.

Class labels should be strings, confusion values should be integers, and thresholds should be floats.

log_accuracy_table(name, value, description='')

Parameters

name
str

The name of the accuracy table.

value
str or dict

JSON containing name, version, and data properties.

description
str

An optional metric description.

Remarks

Example of a valid JSON value:


   {
       "schema_type": "accuracy_table",
       "schema_version": "v1",
       "data": {
           "probability_tables": [
               [
                   [82, 118, 0, 0],
                   [75, 31, 87, 7],
                   [66, 9, 109, 16],
                   [46, 2, 116, 36],
                   [0, 0, 118, 82]
               ],
               [
                   [60, 140, 0, 0],
                   [56, 20, 120, 4],
                   [47, 4, 136, 13],
                   [28, 0, 140, 32],
                   [0, 0, 140, 60]
               ],
               [
                   [58, 142, 0, 0],
                   [53, 29, 113, 5],
                   [40, 10, 132, 18],
                   [24, 1, 141, 34],
                   [0, 0, 142, 58]
               ]
           ],
           "percentile_tables": [
               [
                   [82, 118, 0, 0],
                   [82, 67, 51, 0],
                   [75, 26, 92, 7],
                   [48, 3, 115, 34],
                   [3, 0, 118, 79]
               ],
               [
                   [60, 140, 0, 0],
                   [60, 89, 51, 0],
                   [60, 41, 99, 0],
                   [46, 5, 135, 14],
                   [3, 0, 140, 57]
               ],
               [
                   [58, 142, 0, 0],
                   [56, 93, 49, 2],
                   [54, 47, 95, 4],
                   [41, 10, 132, 17],
                   [3, 0, 142, 55]
               ]
           ],
           "probability_thresholds": [0.0, 0.25, 0.5, 0.75, 1.0],
           "percentile_thresholds": [0.0, 25.0, 50.0, 75.0, 100.0],
           "class_labels": ["0", "1", "2"]
       }
   }
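The payload can be built and sanity-checked in plain Python before logging. The following sketch uses a made-up 2-class, 3-threshold table (N = 10) and verifies the invariants listed above; the metric name in the final call is illustrative:

```python
# Hypothetical 2-class, 3-threshold payload (N = 10 validation samples);
# each row is [TP, FP, TN, FN] for one threshold, one block per class.
N = 10
payload = {
    "schema_type": "accuracy_table",
    "schema_version": "v1",
    "data": {
        "probability_tables": [
            [[4, 6, 0, 0], [3, 2, 4, 1], [0, 0, 6, 4]],
            [[6, 4, 0, 0], [5, 1, 3, 1], [0, 0, 4, 6]],
        ],
        "percentile_tables": [
            [[4, 6, 0, 0], [4, 3, 3, 0], [1, 0, 6, 3]],
            [[6, 4, 0, 0], [6, 2, 2, 0], [2, 0, 4, 4]],
        ],
        "probability_thresholds": [0.0, 0.5, 1.0],
        "percentile_thresholds": [0.0, 50.0, 100.0],
        "class_labels": ["0", "1"],
    },
}

# Check the invariants listed above for every class and threshold.
for table in payload["data"]["probability_tables"] + payload["data"]["percentile_tables"]:
    for tp, fp, tn, fn in table:
        assert tp + fp + tn + fn == N                      # confusion values sum to N
    assert len({row[0] + row[3] for row in table}) == 1    # TP + FN constant per class
    assert len({row[1] + row[2] for row in table}) == 1    # TN + FP constant per class

# Inside an active run:
# run.log_accuracy_table("my_accuracy_table", payload)
```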

log_confusion_matrix(name, value, description='')

Log a confusion matrix to the artifact store.

This logs a wrapper around the sklearn confusion matrix. The metric data contains the class labels and a 2D list for the matrix itself. See the following link for more details on how the metric is computed: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html

log_confusion_matrix(name, value, description='')

Parameters

name
str

The name of the confusion matrix.

value
str or dict

JSON containing name, version, and data properties.

description
str

An optional metric description.

Remarks

Example of a valid JSON value:


   {
       "schema_type": "confusion_matrix",
       "schema_version": "v1",
       "data": {
           "class_labels": ["0", "1", "2", "3"],
           "matrix": [
               [3, 0, 1, 0],
               [0, 1, 0, 1],
               [0, 0, 1, 0],
               [0, 0, 0, 1]
           ]
       }
   }
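A minimal sketch of building this payload from raw labels in plain Python (the labels and metric name are made up; sklearn.metrics.confusion_matrix computes the same matrix):

```python
# Hypothetical true/predicted labels for a 3-class problem.
y_true = ["0", "1", "2", "2", "1", "0"]
y_pred = ["0", "1", "2", "1", "1", "2"]

class_labels = sorted(set(y_true) | set(y_pred))
index = {label: i for i, label in enumerate(class_labels)}

# matrix[i][j] counts samples whose true class is i and predicted class is j.
matrix = [[0] * len(class_labels) for _ in class_labels]
for t, p in zip(y_true, y_pred):
    matrix[index[t]][index[p]] += 1

payload = {
    "schema_type": "confusion_matrix",
    "schema_version": "v1",
    "data": {"class_labels": class_labels, "matrix": matrix},
}
# Inside an active run:
# run.log_confusion_matrix("my_confusion_matrix", payload)
```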

log_image(name, path=None, plot=None, description='')

Log an image metric to the run record.

log_image(name, path=None, plot=None, description='')

Parameters

name
str

The name of the metric.

path
str

The path or stream of the image.

plot
matplotlib.pyplot

The plot to log as an image.

description
str

An optional metric description.

Remarks

Use this method to log an image file or a matplotlib plot to the run. These images will be visible and comparable in the run record.
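A short sketch, assuming matplotlib is installed and an active run is in scope (the metric names and file path are illustrative):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so no display is required
import matplotlib.pyplot as plt

plt.plot([0, 1, 2, 3], [0, 1, 4, 9])
plt.title("squares")

# Pass either `path` or `plot`, not both:
# run.log_image("squares_plot", plot=plt)            # log the live figure
# run.log_image("roc_curve", path="outputs/roc.png")  # or a saved image file
```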

log_list(name, value, description='')

Log a list of metric values to the run with the given name.

log_list(name, value, description='')

Parameters

name
str

The name of metric.

value
list

The values of the metric.

description
str

An optional metric description.
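For example (a sketch; the metric name and values are illustrative, and the logging call assumes an active run):

```python
# A decaying learning-rate schedule logged as one list-valued metric.
rates = [0.1 * (0.5 ** step) for step in range(4)]

# Inside an active run (e.g. run = experiment.start_logging()):
# run.log_list("learning_rate", rates, description="halving LR schedule")
```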

log_predictions(name, value, description='')

Log predictions to the artifact store.

This logs a metric score that can be used to compare the distributions of true target values to the distribution of predicted values for a regression task.

The predictions are binned and standard deviations are calculated for error bars on a line chart.

log_predictions(name, value, description='')

Parameters

name
str

The name of the predictions.

value
str or dict

JSON containing name, version, and data properties.

description
str

An optional metric description.

Remarks

Example of a valid JSON value:


   {
       "schema_type": "predictions",
       "schema_version": "v1",
       "data": {
           "bin_averages": [0.25, 0.75],
           "bin_errors": [0.013, 0.042],
           "bin_counts": [56, 34],
           "bin_edges": [0.0, 0.5, 1.0]
       }
   }
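A sketch of producing this payload from raw (actual, predicted) pairs in plain Python (the data and bin edges are made up):

```python
from statistics import mean, pstdev

# Hypothetical (actual, predicted) pairs for a regression task; predictions
# are binned by the actual value to build the payload shown above.
pairs = [(0.1, 0.15), (0.2, 0.18), (0.3, 0.35), (0.7, 0.66), (0.9, 0.88)]
bin_edges = [0.0, 0.5, 1.0]

bins = [[] for _ in bin_edges[:-1]]
for actual, predicted in pairs:
    for i in range(len(bins)):
        # Half-open bins, with the final bin's right edge inclusive.
        if bin_edges[i] <= actual < bin_edges[i + 1] or (
            i == len(bins) - 1 and actual == bin_edges[-1]
        ):
            bins[i].append(predicted)

payload = {
    "schema_type": "predictions",
    "schema_version": "v1",
    "data": {
        "bin_averages": [round(mean(b), 4) for b in bins],
        "bin_errors": [round(pstdev(b), 4) for b in bins],
        "bin_counts": [len(b) for b in bins],
        "bin_edges": bin_edges,
    },
}
# Inside an active run:
# run.log_predictions("my_predictions", payload)
```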

log_residuals(name, value, description='')

Log residuals to the artifact store.

This logs the data needed to display a histogram of residuals for a regression task. The residuals are predicted - actual.

There should be one more edge than the number of counts. Please see the numpy histogram documentation for examples of using counts and edges to represent a histogram. https://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html

log_residuals(name, value, description='')

Parameters

name
str

The name of the residuals.

value
str or dict

JSON containing name, version, and data properties.

description
str

An optional metric description.

Remarks

Example of a valid JSON value:


   {
       "schema_type": "residuals",
       "schema_version": "v1",
       "data": {
           "bin_edges": [50, 100, 200, 300, 350],
           "bin_counts": [0.88, 20, 30, 50.99]
       }
   }
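A sketch of binning residuals into this payload in plain Python (the residual values and bin edges are made up):

```python
# Hypothetical residuals (predicted - actual) for a regression task, binned
# into a histogram payload matching the schema above (one more edge than count).
residuals = [-0.8, -0.2, 0.1, 0.3, 0.4, 1.2]
bin_edges = [-1.0, 0.0, 1.0, 2.0]

bin_counts = [0] * (len(bin_edges) - 1)
for r in residuals:
    for i in range(len(bin_counts)):
        # Half-open bins, with the final bin's right edge inclusive.
        if bin_edges[i] <= r < bin_edges[i + 1] or (
            i == len(bin_counts) - 1 and r == bin_edges[-1]
        ):
            bin_counts[i] += 1

payload = {
    "schema_type": "residuals",
    "schema_version": "v1",
    "data": {"bin_edges": bin_edges, "bin_counts": bin_counts},
}
assert len(payload["data"]["bin_edges"]) == len(payload["data"]["bin_counts"]) + 1
# Inside an active run:
# run.log_residuals("my_residuals", payload)
```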

log_row(name, description=None, **kwargs)

Log a row metric to the run with the given name.

log_row(name, description=None, **kwargs)

Parameters

name
str

The name of metric.

description
str

An optional metric description.

default value: None
kwargs
dict

A dictionary of additional parameters. In this case, the columns of the metric.

Remarks

Using log_row creates a table metric with columns as described in kwargs. Each named parameter generates a column with the value specified. log_row can be called once to log an arbitrary tuple, or multiple times in a loop to generate a complete table.


   citrus = ['orange', 'lemon', 'lime']
   sizes = [10, 7, 3]
   for fruit, size in zip(citrus, sizes):
       run.log_row("citrus", fruit=fruit, size=size)

log_table(name, value, description='')

Log a table metric to the run with the given name.

log_table(name, value, description='')

Parameters

name
str

The name of metric.

value
dict

The table value of the metric, a dictionary where keys are columns to be posted to the service.

description
str

An optional metric description.
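For example (a sketch; the column names are illustrative, and the logging call assumes an active run):

```python
# A table metric is a dict mapping column names to equal-length lists of values.
table = {
    "epoch": [1, 2, 3],
    "train_loss": [0.90, 0.52, 0.31],
    "val_loss": [1.02, 0.61, 0.45],
}
assert len({len(column) for column in table.values()}) == 1  # columns must align

# Inside an active run (e.g. run = experiment.start_logging()):
# run.log_table("loss_table", table, description="loss per epoch")
```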

register_model(model_name, model_path=None, tags=None, properties=None, model_framework=None, model_framework_version=None, description=None, datasets=None, sample_input_dataset=None, sample_output_dataset=None, resource_configuration=None, **kwargs)

Register a model for operationalization.

register_model(model_name, model_path=None, tags=None, properties=None, model_framework=None, model_framework_version=None, description=None, datasets=None, sample_input_dataset=None, sample_output_dataset=None, resource_configuration=None, **kwargs)

Parameters

model_name
str

The name of the model.

model_path
str

The relative cloud path to the model from the outputs/ directory. When not specified (None), model_name is used as the path.

default value: None
tags
dict[str, str]

A dictionary of key value tags to assign to the model.

default value: None
properties
dict[str, str]

A dictionary of key value properties to assign to the model. These properties cannot be changed after model creation, however new key value pairs can be added.

default value: None
model_framework
str

The framework of the model to register. Currently supported frameworks: TensorFlow, ScikitLearn, Onnx, and Custom.

default value: None
model_framework_version
str

The framework version of the registered model.

default value: None
description
str

An optional description of the model.

default value: None
datasets
list[(str, AbstractDataset)]

A list of tuples where the first element describes the dataset-model relationship and the second element is the dataset.

default value: None
sample_input_dataset
AbstractDataset

Optional. The sample input dataset for the registered model.

default value: None
sample_output_dataset
AbstractDataset

Optional. The sample output dataset for the registered model.

default value: None
resource_configuration
ResourceConfiguration

Optional. The resource configuration for running the registered model.

default value: None
kwargs
dict

Optional parameters.

Returns

The registered model.

Return type

Model

Remarks


   model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl')

remove_tags(tags)

Delete the list of mutable tags on this run.

remove_tags(tags)

Parameters

tags
list

The list of tag keys to remove from the run.

Returns

The tags stored on the run object.

restore_snapshot(snapshot_id=None, path=None)

Restore a snapshot as a ZIP file. Returns the path to the ZIP.

restore_snapshot(snapshot_id=None, path=None)

Parameters

snapshot_id
str

The snapshot ID to restore. The latest is used if not specified.

default value: None
path
str

The path where the downloaded ZIP is saved.

default value: None

Returns

The path.

Return type

str

set_tags(tags)

Add or modify a set of tags on the run. Tags not passed in the dictionary are left untouched.

You can also add simple string tags. When these tags appear in the tag dictionary as keys, they have a value of None. For more information, see Tag and find runs.

set_tags(tags)

Parameters

tags
dict[str, str] or str

The tags stored in the run object.

start()

Mark the run as started.

This is typically used in advanced scenarios when the run has been created by another actor.

start()

submit_child(config, tags=None, **kwargs)

Submit an experiment and return the active child run.

submit_child(config, tags=None, **kwargs)

Parameters

config
object

The config to be submitted.

tags
dict

Tags to be added to the submitted run, e.g., {"tag": "value"}.

default value: None
kwargs
dict

Additional parameters used in submit function for configurations.

Returns

A run object.

Return type

Run

Remarks

Submit is an asynchronous call to the Azure Machine Learning platform to execute a trial on local or remote hardware. Depending on the configuration, submit will automatically prepare your execution environments, execute your code, and capture your source code and results into the experiment's run history.

To submit an experiment you first need to create a configuration object describing how the experiment is to be run. The configuration depends on the type of trial required.

An example of how to submit a child experiment from your local machine using ScriptRunConfig is as follows:


   from azureml.core import RunConfiguration, ScriptRunConfig

   # run a trial from the train.py code in your current directory
   config = ScriptRunConfig(source_directory='.', script='train.py',
                            run_config=RunConfiguration())
   run = parent_run.submit_child(config)

   # get the url to view the progress of the experiment and then wait
   # until the trial is complete
   print(run.get_portal_url())
   run.wait_for_completion()

For details on how to configure a run, see the documentation for the configuration type being used, such as ScriptRunConfig.

tag(key, value=None)

Tag the run with a string key and optional string value.

tag(key, value=None)

Parameters

key
str

The tag key.

value
str

An optional value for the tag.

default value: None

Remarks

Tags and Properties on a run are both dictionaries of string -> string. The difference between them is mutability: Tags can be set, updated, and deleted while Properties can only be added. This makes Properties more appropriate for system/workflow related behavior triggers, while Tags are generally user-facing and meaningful for the consumers of the experiment.


   run = experiment.start_logging()
   run.tag('DeploymentCandidate')
   run.tag('modifiedBy', 'Master CI')
   run.tag('modifiedBy', 'release pipeline') # Careful, tags are mutable

   run.add_properties({'BuildId': os.environ.get('VSTS_BUILD_ID')}) # Properties are not

   tags = run.get_tags()
   # tags = { 'DeploymentCandidate': None, 'modifiedBy': 'release pipeline' }

take_snapshot(file_or_folder_path)

Save a snapshot of the input file or folder.

take_snapshot(file_or_folder_path)

Parameters

file_or_folder_path
str

The file or folder containing the run source code.

Returns

Returns the snapshot ID.

Return type

str

Remarks

Snapshots are intended to be the source code used to execute the experiment run. These are stored with the run so that the run trial can be replicated in the future.

Note

Snapshots are automatically taken when submit(config, tags=None, **kwargs) is called.

Typically, the take_snapshot method is only required for interactive (notebook) runs.

upload_file(name, path_or_stream)

Upload a file to the run record.

upload_file(name, path_or_stream)

Parameters

name
str

The name of the file to upload.

path_or_stream
str

The relative local path or stream to the file to upload.

Return type

Remarks


   run = experiment.start_logging()
   run.upload_file(name='important_file', path="path/on/disk/file.txt")

Note

Runs automatically capture files in the specified output directory, which defaults to "./outputs" for most run types. Use upload_file only when additional files need to be uploaded or an output directory is not specified.

upload_files(names, paths, return_artifacts=False, timeout_seconds=None)

Upload files to the run record.

upload_files(names, paths, return_artifacts=False, timeout_seconds=None)

Parameters

names
list

The names of the files to upload. If set, paths must also be set.

paths
list

The relative local paths to the files to upload. If set, names is required.

return_artifacts
bool

Indicates that an Artifact object should be returned for each file uploaded.

timeout_seconds
int

The timeout for uploading files.

Remarks

upload_files has the same effect as calling upload_file on each of the files separately; however, there are performance and resource utilization benefits when using upload_files.


   import os

   run = experiment.start_logging()
   file_name_1 = 'important_file_1'
   file_name_2 = 'important_file_2'
   run.upload_files(names=[file_name_1, file_name_2],
                       paths=['path/on/disk/file_1.txt', 'other/path/on/disk/file_2.txt'])

   run.download_file(file_name_1, 'file_1.txt')

   os.mkdir("path")  # The path must exist
   run.download_file(file_name_2, 'path/file_2.txt')

Note

Runs automatically capture files in the specified output directory, which defaults to "./outputs" for most run types. Use upload_files only when additional files need to be uploaded or an output directory is not specified.

upload_folder(name, path)

Upload the specified folder to the given prefix name.

upload_folder(name, path)

Parameters

name
str

The name of the folder of files to upload.

path
str

The relative local path to the folder to upload.

Remarks


   run = experiment.start_logging()
   run.upload_folder(name='important_files', path='path/on/disk')

   run.download_file('important_files/existing_file.txt', 'local_file.txt')

Note

Runs automatically capture files in the specified output directory, which defaults to "./outputs" for most run types. Use upload_folder only when additional files need to be uploaded or an output directory is not specified.

wait_for_completion(show_output=False, wait_post_processing=False, raise_on_error=True)

Wait for the completion of this run. Returns the status object after the wait.

wait_for_completion(show_output=False, wait_post_processing=False, raise_on_error=True)

Parameters

show_output
bool

Indicates whether to show the run output on sys.stdout.

default value: False
wait_post_processing
bool

Indicates whether to wait for the post processing to complete after the run completes.

default value: False
raise_on_error
bool

Indicates whether an Error is raised when the Run is in a failed state.

default value: True

Returns

The status object.

Return type

dict

Attributes

experiment

Get experiment containing the run.

Returns

The experiment corresponding to the run.

Return type

Experiment

id

Get run ID.

The ID of the run - an identifier unique across the containing experiment.

Returns

The run ID.

Return type

str

input_datasets

Return the dictionary for input datasets.

Returns

A dictionary with the name being the dataset's name and value being the delivered data. If the mode is set to mount or download, it will return the base path of the delivered data. If the mode is set to direct, it will return the dataset object.

Return type

InputDatasets

name

Return the run name.

The optional name of the run - a user-set name for later identification.

Returns

The run name.

Return type

str

number

Get run number.

A monotonically increasing number representing the order of runs within an experiment.

Returns

The run number.

Return type

int

parent

Fetch the parent run for this run from the service.

Runs can have an optional parent, resulting in a potential tree hierarchy of runs.

Returns

The parent run, or None if one is not set.

Return type

Run

properties

Return the immutable properties of this run.

Returns

The locally cached properties of the run.

Return type

dict

Remarks

Properties include immutable system-generated information such as duration, date of execution, user, etc.

status

Return the run object's status.

tags

Return the set of mutable tags on this run.

Returns

The tags stored on the run object.

Return type

dict

type

Get run type.

Indicates how the run was created or configured.

Returns

The run type.

Return type

str