Experiment Class

Represents the main entry point for creating and working with experiments in Azure Machine Learning.

An Experiment is a container of trials that represent multiple model runs.

Inheritance
azureml._logging.chained_identity.ChainedIdentity
   Experiment
azureml.core._portal.HasExperimentPortal
   Experiment

Constructor

Experiment(workspace, name, _skip_name_validation=False, _id=None, _archived_time=None, _create_in_cloud=True, _experiment_dto=None, **kwargs)

Parameters

workspace
Workspace
Required

The workspace object containing the experiment.

name
str
Required

The experiment name.

kwargs
dict
Required

A dictionary of keyword args.

_skip_name_validation
default value: False
_id
default value: None
_archived_time
default value: None
_create_in_cloud
default value: True
_experiment_dto
default value: None

Remarks

An Azure Machine Learning experiment represents the collection of trials used to validate a user's hypothesis.

In Azure Machine Learning, an experiment is represented by the Experiment class and a trial is represented by the Run class.

To get or create an experiment from a workspace, you request the experiment by name. The experiment name must be 3 to 36 characters long, start with a letter or a number, and contain only letters, numbers, underscores, and dashes.


   from azureml.core import Experiment

   experiment = Experiment(workspace, "MyExperiment")

If the experiment is not found in the workspace, a new experiment is created.

There are two ways to execute an experiment trial. If you are experimenting interactively in a Jupyter Notebook, use start_logging. If you are submitting an experiment from source code or some other type of configured trial, use submit.

Both mechanisms create a Run object. In interactive scenarios, use logging methods such as log to add measurements and metrics to the trial record. In configured scenarios, use status methods such as get_status to retrieve information about the run.

In both cases, you can use query methods such as get_metrics to retrieve the current values, if any, of trial measurements and metrics.
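
For example, a minimal sketch of reading metrics back from an existing run (assumes a run object obtained from start_logging or submit):


   # Retrieve any metrics logged so far on the run (assumes `run` exists).
   metrics = run.get_metrics()
   print(metrics)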

Methods

archive

Archive an experiment.

delete

Delete an experiment in the workspace.

from_directory

(Deprecated) Load an experiment from the specified path.

get_docs_url

URL to the documentation for this class.

get_runs

Return a generator of the runs for this experiment, in reverse chronological order.

list

Return the list of experiments in the workspace.

reactivate

Reactivate an archived experiment.

refresh

Return the most recent version of the experiment from the cloud.

remove_tags

Delete the specified tags from the experiment.

set_tags

Add or modify a set of tags on the experiment. Tags not passed in the dictionary are left untouched.

start_logging

Start an interactive logging session and create an interactive run in the specified experiment.

submit

Submit an experiment and return the newly created, active run.

tag

Tag the experiment with a string key and optional string value.

archive

Archive an experiment.

archive()

Remarks

After archival, the experiment will not be listed by default. Attempting to write to an archived experiment will create a new active experiment with the same name. An archived experiment can be restored by calling reactivate as long as there is not another active experiment with the same name.
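
A minimal sketch of the archive and reactivate round trip (assumes an existing workspace object and that no other active experiment takes the name in the meantime):


   from azureml.core import Experiment

   experiment = Experiment(workspace, "MyExperiment")
   experiment.archive()      # hidden from default listings
   experiment.reactivate()   # restore while the name is still free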

delete

Delete an experiment in the workspace.

static delete(workspace, experiment_id)

Parameters

workspace
Workspace
Required

The workspace which the experiment belongs to.

experiment_id
Required

The ID of the experiment to delete.
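
A minimal sketch of deleting an experiment by its ID (assumes an existing workspace object):


   from azureml.core import Experiment

   experiment = Experiment(workspace, "MyExperiment")
   Experiment.delete(workspace, experiment.id)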

from_directory

(Deprecated) Load an experiment from the specified path.

static from_directory(path, auth=None)

Parameters

path
str
Required

Directory containing the experiment configuration files.

auth
ServicePrincipalAuthentication or InteractiveLoginAuthentication
default value: None

The auth object. If None the default Azure CLI credentials will be used or the API will prompt for credentials.

Returns

The Experiment object.

Return type

Experiment

get_docs_url

URL to the documentation for this class.

get_docs_url()

Returns

The documentation URL.

Return type

str

get_runs

Return a generator of the runs for this experiment, in reverse chronological order.

get_runs(type=None, tags=None, properties=None, include_children=False)

Parameters

type
string
default value: None

Filter the returned generator of runs by the provided type. See add_type_provider for creating run types.

tags
string or dict
default value: None

Filter runs by "tag" or {"tag": "value"}.

properties
string or dict
default value: None

Filter runs by "property" or {"property": "value"}.

include_children
bool
default value: False

By default, fetch only top-level runs. Set to True to list all runs.

Returns

The list of runs matching supplied filters.

Return type

list
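
A minimal sketch of iterating over filtered runs; the tag key and value below are illustrative:


   # get_runs yields runs newest-first; filter by a tag key/value pair
   for run in experiment.get_runs(tags={"stage": "dev"}):
       print(run.id, run.get_status())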

list

Return the list of experiments in the workspace.

static list(workspace, experiment_name=None, view_type='ActiveOnly', tags=None)

Parameters

workspace
Workspace
Required

The workspace from which to list the experiments.

experiment_name
str
default value: None

Optional name to filter experiments.

view_type
ViewType
default value: ActiveOnly

Optional enum value to filter or include archived experiments.

tags
default value: None

Optional tag key or dictionary of tag key-value pairs to filter experiments on.

Returns

A list of experiment objects.

Return type

list
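
A minimal sketch of listing experiments in a workspace; the 'All' view type, which includes archived experiments, is an assumption here:


   from azureml.core import Experiment

   # Default view_type is 'ActiveOnly'; 'All' is assumed to include archived experiments.
   for exp in Experiment.list(workspace, view_type='All'):
       print(exp.name, exp.archived_time)
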
reactivate

Reactivate an archived experiment.

reactivate(new_name=None)

Parameters

new_name
str
default value: None

This parameter is no longer supported.

Remarks

An archived experiment can only be reactivated if there is not another active experiment with the same name.

refresh

Return the most recent version of the experiment from the cloud.

refresh()

remove_tags

Delete the specified tags from the experiment.

remove_tags(tags)

Parameters

tags
[str]
Required

The tag keys to remove.

set_tags

Add or modify a set of tags on the experiment. Tags not passed in the dictionary are left untouched.

set_tags(tags)

Parameters

tags
dict[str]
Required

A dictionary of tag keys and values to add or update on the experiment.
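
A minimal sketch of tag management; the tag keys and values are illustrative:


   experiment.set_tags({"team": "vision", "stage": "dev"})  # add or update
   experiment.remove_tags(["stage"])                        # delete by key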

start_logging

Start an interactive logging session and create an interactive run in the specified experiment.

start_logging(*args, **kwargs)

Parameters

experiment
Experiment
Required

The experiment.

outputs
str
Required

Optional outputs directory to track. For no outputs, pass False.

snapshot_directory
str
Required

Optional directory to take snapshot of. Setting to None will take no snapshot.

args
list
Required
kwargs
dict
Required

Returns

The started run.

Return type

Run

Remarks

start_logging creates an interactive run for use in scenarios such as Jupyter Notebooks. Any metrics logged during the session are added to the run record in the experiment. If an output directory is specified, the contents of that directory are uploaded as run artifacts upon run completion.


   from azureml.core import Experiment

   experiment = Experiment(workspace, "My Experiment")
   run = experiment.start_logging(outputs=None, snapshot_directory=".", display_name="My Run")
   ...
   run.log("Accuracy", accuracy)
   run.complete()

Note

run_id is automatically generated for each run and is unique within the experiment.

submit

Submit an experiment and return the newly created, active run.

submit(config, tags=None, **kwargs)

Parameters

config
object
Required

The config to be submitted.

tags
dict
default value: None

Tags to be added to the submitted run, {"tag": "value"}.

kwargs
dict
Required

Additional parameters passed to the submit function of the configuration type.

Returns

A run.

Return type

Run

Remarks

Submit is an asynchronous call to the Azure Machine Learning platform to execute a trial on local or remote hardware. Depending on the configuration, submit will automatically prepare your execution environments, execute your code, and capture your source code and results into the experiment's run history.

To submit an experiment you first need to create a configuration object describing how the experiment is to be run. The configuration depends on the type of trial required.

An example of how to submit an experiment from your local machine is as follows:


   from azureml.core import ScriptRunConfig
   from azureml.core.runconfig import RunConfiguration

   # run a trial from the train.py code in your current directory
   config = ScriptRunConfig(source_directory='.', script='train.py',
       run_config=RunConfiguration())
   run = experiment.submit(config)

   # get the url to view the progress of the experiment and then wait
   # until the trial is complete
   print(run.get_portal_url())
   run.wait_for_completion()

For details on how to configure a run, see the configuration type details.

  • ScriptRunConfig

  • azureml.train.automl.automlconfig.AutoMLConfig

  • azureml.pipeline.core.Pipeline

  • azureml.pipeline.core.PublishedPipeline

  • azureml.pipeline.core.PipelineEndpoint

Note

When you submit the training run, a snapshot of the directory that contains your training scripts is created and sent to the compute target. It is also stored as part of the experiment in your workspace. If you change files and submit the run again, only the changed files will be uploaded.

To prevent files from being included in the snapshot, create a .gitignore or .amlignore file in the directory and add the files to it. The .amlignore file uses the same syntax and patterns as the .gitignore file. If both files exist, the .amlignore file takes precedence.

For more information, see Snapshots.
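
For example, a minimal .amlignore that keeps local data and logs out of the snapshot; the entries below are illustrative:


   # .amlignore -- same pattern syntax as .gitignore
   data/
   logs/
   *.ckpt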

tag

Tag the experiment with a string key and optional string value.

tag(key, value=None)

Parameters

key
str
Required

The tag key.

value
str
default value: None

An optional value for the tag.

Remarks

Tags on an experiment are stored in a dictionary with string keys and string values. Tags can be set, updated, and deleted. Tags are user-facing and generally contain meaningful information for the consumers of the experiment.


   experiment.tag('DeploymentCandidate')
   experiment.tag('modifiedBy', 'Master CI')
   experiment.tag('modifiedBy', 'release pipeline') # Careful, tags are mutable

Attributes

archived_time

Return the archived time for the experiment. Value should be None for an active experiment.

Returns

The archived time of the experiment.

Return type

str

id

Return the ID of the experiment.

Returns

The ID of the experiment.

Return type

str

name

Return the name of the experiment.

Returns

The name of the experiment.

Return type

str

tags

Return the mutable set of tags on the experiment.

Returns

The tags on an experiment.

Return type

dict

workspace

Return the workspace containing the experiment.

Returns

The workspace object.

Return type

Workspace

workspace_object

(Deprecated) Return the workspace containing the experiment.

Use the workspace attribute.

Returns

The workspace object.

Return type

Workspace