Experiment class

Definition

The main entry point into experimenting with the Azure Machine Learning service.

The class acts as a container of trials that represent multiple model runs.

Experiment(workspace, name, _skip_name_validation=False, _id=None, _archived_time=None, _create_in_cloud=True, _experiment_dto=None, **kwargs)
Inheritance
builtins.object
    azureml._logging.chained_identity.ChainedIdentity
        Experiment
    azureml.core._portal.HasWorkspacePortal
        azureml.core._portal.HasExperimentPortal
            Experiment

Parameters

workspace
Workspace

The workspace object containing the experiment.

name
str

The experiment name.

kwargs
dict

A dictionary of keyword arguments.

Remarks

An Azure Machine Learning experiment represents the collection of trials used to validate a user's hypothesis.

In Azure Machine Learning, an experiment is represented by the Experiment class and a trial is represented by the Run class.

To get or create an experiment from a workspace, you request the experiment using the experiment name. The experiment name must be 3-36 characters, start with a letter or a number, and can only contain letters, numbers, underscores, and dashes.


   experiment = Experiment(workspace, "MyExperiment")

If the experiment is not found in the workspace, a new experiment is created.

There are two ways to execute an experiment trial. If you are interactively experimenting in a Jupyter notebook, use start_logging(*args, **kwargs). If you are submitting an experiment from source code or some other type of configured trial, use submit(config, tags=None, **kwargs).

Both mechanisms create a Run object. In interactive scenarios, use logging methods such as log(name, value, description='') to add measurements and metrics to the trial record. In configured scenarios, use status methods such as get_status() to retrieve information about the run.

In both cases, you can use query methods like get_metrics(name=None, recursive=False, run_type=None, populate=False) to retrieve the current values, if any, of any trial measurements and metrics.
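
For example, a minimal sketch, assuming workspace is an existing Workspace object and the experiment already contains at least one run:


   from azureml.core import Experiment

   experiment = Experiment(workspace, "MyExperiment")

   # Take the most recent run (get_runs yields newest first).
   run = next(experiment.get_runs())

   # Status methods report the run's current state.
   print(run.get_status())

   # Query methods return logged measurements and metrics, if any.
   print(run.get_metrics())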

Methods

archive()

Archive an experiment.

from_directory(path, auth=None)

(Deprecated) Load an experiment from the specified path.

get_docs_url()

URL to the documentation for this class.

get_runs(type=None, tags=None, properties=None, include_children=False)

Return a generator of the runs for this experiment, in reverse chronological order.

list(workspace, experiment_name=None, view_type='ActiveOnly', tags=None)

Return the list of experiments in the workspace.

reactivate(new_name=None)

Reactivate an archived experiment.

refresh()

Return the most recent version of the experiment from the cloud.

remove_tags(tags)

Delete the specified tags from the experiment.

set_tags(tags)

Add or modify a set of tags on the experiment. Tags not passed in the dictionary are left untouched.

start_logging(*args, **kwargs)

Start an interactive logging session in the specified experiment.

submit(config, tags=None, **kwargs)

Submit an experiment and return the active created run.

tag(key, value=None)

Tag the experiment with a string key and optional string value.

archive()

Archive an experiment.

archive()

Remarks

After archival, the experiment will not be listed by default. Attempting to write to an archived experiment will create a new active experiment with the same name. An archived experiment can be restored by calling reactivate(new_name=None) as long as there is not another active experiment with the same name.
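
For example, a minimal sketch, assuming workspace is an existing Workspace object:


   experiment = Experiment(workspace, "MyExperiment")
   experiment.archive()

   # The experiment can be restored later, provided no active
   # experiment has taken its name in the meantime.
   experiment.reactivate()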

from_directory(path, auth=None)

(Deprecated) Load an experiment from the specified path.

from_directory(path, auth=None)

Parameters

path
str

The directory containing the experiment configuration files.

auth
ServicePrincipalAuthentication or InteractiveLoginAuthentication

The auth object. If None, the default Azure CLI credentials will be used or the API will prompt for credentials.

default value: None

Returns

The loaded experiment.

Return type

Experiment

get_docs_url()

URL to the documentation for this class.

get_docs_url()

Parameters

cls

The Experiment class.
Returns

The documentation URL.

Return type

str

get_runs(type=None, tags=None, properties=None, include_children=False)

Return a generator of the runs for this experiment, in reverse chronological order.

get_runs(type=None, tags=None, properties=None, include_children=False)

Parameters

type
string

Filter the returned generator of runs by the provided type. See add_type_provider(runtype, run_factory) for creating run types.

default value: None
tags
string or dict

Filter runs by "tag" or {"tag": "value"}.

default value: None
properties
string or dict

Filter runs by "property" or {"property": "value"}.

default value: None
include_children
bool

By default, fetch only top-level runs. Set to True to list all runs.

default value: False

Returns

The list of runs matching the supplied filters.

Return type

generator
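
For instance, a brief sketch using hypothetical tag and property names, assuming an existing experiment object:


   # Fetch top-level runs carrying a given tag and property.
   runs = experiment.get_runs(tags={"production": "true"},
                              properties={"team": "forecasting"})
   for run in runs:
       print(run.id, run.get_status())
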
list(workspace, experiment_name=None, view_type='ActiveOnly', tags=None)

Return the list of experiments in the workspace.

list(workspace, experiment_name=None, view_type='ActiveOnly', tags=None)

Parameters

workspace
Workspace

The workspace from which to list the experiments.

experiment_name
str

Optional name to filter experiments.

default value: None
view_type
ViewType

Optional view_type enum to filter or include archived experiments.

default value: ActiveOnly
tags

Optional tag key or dictionary of tag key-value pairs to filter experiments on.

default value: None

Returns

A list of Experiment objects.

Return type

list
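
For example, a minimal sketch, assuming workspace is an existing Workspace object and that the ViewType value "All" includes archived experiments:


   from azureml.core import Experiment

   # List both active and archived experiments in the workspace.
   for exp in Experiment.list(workspace, view_type="All"):
       print(exp.name, exp.archived_time)
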
reactivate(new_name=None)

Reactivate an archived experiment.

reactivate(new_name=None)

Parameters

new_name
str

Optional new name for renaming an archived experiment during reactivation.

Remarks

An archived experiment can only be reactivated if there is not another active experiment with the same name. In the case of a name conflict, the archived experiment can be renamed when calling reactivate by passing new_name.
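
For example, a minimal sketch, assuming experiment refers to an archived Experiment object; the new name is hypothetical:


   # Restore the experiment, renaming it because its old name is taken.
   experiment.reactivate(new_name="MyExperiment-restored")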

refresh()

Return the most recent version of the experiment from the cloud.

refresh()

remove_tags(tags)

Delete the specified tags from the experiment.

remove_tags(tags)

Parameters

tags
[str]

The tag keys to remove.

set_tags(tags)

Add or modify a set of tags on the experiment. Tags not passed in the dictionary are left untouched.

set_tags(tags)

Parameters

tags
dict[str, str]

The tags to store on the experiment object.
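
For example, a small sketch with hypothetical tag names, assuming an existing experiment object:


   # Add or update two tags in one call; other tags are untouched.
   experiment.set_tags({"team": "forecasting", "stage": "dev"})

   # Later, delete the tags that are no longer needed.
   experiment.remove_tags(["stage"])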

start_logging(*args, **kwargs)

Start an interactive logging session in the specified experiment.

start_logging(*args, **kwargs)

Parameters

experiment
Experiment

The experiment.

outputs
str

Optional outputs directory to track.

snapshot_directory
str

Optional directory to take a snapshot of. Set to None to skip taking a snapshot.

args
list
kwargs
dict

Returns

The started run.

Return type

Run

Remarks

start_logging creates an interactive run for use in scenarios such as Jupyter notebooks. Any metrics that are logged during the session are added to the run record in the experiment. If an output directory is specified, the contents of that directory are uploaded as run artifacts upon run completion.


   experiment = Experiment(workspace, "MyExperiment")
   run = experiment.start_logging(outputs=None, snapshot_directory=".")
   ...  # train the model and compute its accuracy
   run.log("Accuracy", accuracy)
   run.complete()

Note

run_id is automatically generated for each run and is unique within the experiment.
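
Because the id is unique, it can be used to rehydrate the run later. A minimal sketch, assuming run_id holds the id of an earlier run in this experiment:


   from azureml.core import Run

   # Reconstruct a Run object from its experiment and id.
   run = Run(experiment, run_id)
   print(run.get_status())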

submit(config, tags=None, **kwargs)

Submit an experiment and return the active created run.

submit(config, tags=None, **kwargs)

Parameters

config
object

The config to be submitted.

tags
dict

Tags to be added to the submitted run, {"tag": "value"}.

default value: None
kwargs
dict

Additional parameters used in the submit function for configurations.

Returns

The active created run.

Return type

Run

Remarks

Submit is an asynchronous call to the Azure Machine Learning platform to execute a trial on local or remote hardware. Depending on the configuration, submit will automatically prepare your execution environments, execute your code, and capture your source code and results into the experiment's run history.

To submit an experiment you first need to create a configuration object describing how the experiment is to be run. The configuration depends on the type of trial required.

An example of how to submit an experiment from your local machine is as follows:


   from azureml.core import ScriptRunConfig
   from azureml.core.runconfig import RunConfiguration

   # run a trial from the train.py code in your current directory
   config = ScriptRunConfig(source_directory='.', script='train.py',
       run_config=RunConfiguration())
   run = experiment.submit(config)

   # get the url to view the progress of the experiment and then wait
   # until the trial is complete
   print(run.get_portal_url())
   run.wait_for_completion()

For details on how to configure a run, see the configuration type details.

Note

When you submit the training run, a snapshot of the directory that contains your training scripts is created and sent to the compute target. It is also stored as part of the experiment in your workspace. If you change files and submit the run again, only the changed files will be uploaded.

To prevent files from being included in the snapshot, create a .gitignore or .amlignore file in the directory and add the files to it. The .amlignore file uses the same syntax and patterns as the .gitignore file. If both files exist, the .amlignore file takes precedence.

For more information, see Snapshots.
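
For instance, an illustrative .amlignore file (hypothetical entries, using the same pattern syntax as .gitignore) might contain:


   .ipynb_checkpoints/
   outputs/
   *.log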

tag(key, value=None)

Tag the experiment with a string key and optional string value.

tag(key, value=None)

Parameters

key
str

The tag key.

value
str

An optional value for the tag.

Remarks

Tags on an experiment are stored in a dictionary with string keys and string values. Tags can be set, updated, and deleted. Tags are user-facing and generally contain meaningful information for the consumers of the experiment.


   experiment.tag('DeploymentCandidate')
   experiment.tag('modifiedBy', 'Master CI')
   experiment.tag('modifiedBy', 'release pipeline')  # Careful, tags are mutable

Attributes

archived_time

Return the archived time for the experiment. The value is None for an active experiment.

Returns

The archived time of the experiment.

Return type

str

id

Return id of the experiment.

Returns

The id of the experiment.

Return type

str

name

Return name of the experiment.

Returns

The name of the experiment.

Return type

str

tags

Return the mutable set of tags on the experiment.

Returns

The tags on an experiment.

Return type

dict

workspace

Return the workspace containing the experiment.

Returns

The workspace object.

Return type

Workspace

workspace_object

(Deprecated) Return the workspace containing the experiment.

Returns

The workspace object.

Return type

Workspace