Experiment class

Definition

The main entry point into experimenting with the Azure Machine Learning service.

The class acts as a container of trials that represent multiple model runs.

Experiment(workspace, name, _skip_name_validation=False, **kwargs)
Inheritance
builtins.object
azureml._logging.chained_identity.ChainedIdentity
Experiment
azureml.core._portal.HasWorkspacePortal
azureml.core._portal.HasExperimentPortal
Experiment

Parameters

workspace
Workspace

The workspace object containing the experiment.

name
str

The experiment name.

kwargs
dict

Remarks

An Azure Machine Learning experiment represents a collection of trials used to validate a user's hypothesis.

In Azure Machine Learning, an experiment is represented by the Experiment class and a trial is represented by the Run class.

To get or create an experiment from a workspace, request the experiment by name. The experiment name must be 3 to 36 characters long, start with a letter or a number, and contain only letters, numbers, underscores, and dashes.


   experiment = Experiment(workspace, "MyExperiment")

If the experiment is not found in the workspace, a new experiment is created when an experiment trial is executed.

There are two ways to execute an experiment trial. If you are interactively experimenting in a Jupyter notebook, use start_logging(*args, **kwargs). If you are submitting an experiment from source code or some other type of configured trial, use submit(config, tags=None, **kwargs).

Both mechanisms create a Run object. In interactive scenarios, use logging methods such as log(name, value, description='') to add measurements and metrics to the trial record. In configured scenarios, use status methods such as get_status() to retrieve information about the run.

In both cases, you can use query methods like get_metrics(name=None, recursive=False, run_type=None, populate=False) to retrieve the current values, if any, of any trial measurements and metrics.
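
As an illustrative sketch (assuming a workspace object is already available and a train.py script exists in the current directory; the metric name and values are placeholders), the two mechanisms and the query methods can be combined as follows:

   from azureml.core import Experiment, ScriptRunConfig

   experiment = Experiment(workspace, "MyExperiment")

   # interactive trial: log a metric from a notebook session
   run = experiment.start_logging()
   run.log("Accuracy", 0.95)
   run.complete()

   # configured trial: submit a script and check its status
   config = ScriptRunConfig(source_directory='.', script='train.py')
   submitted_run = experiment.submit(config)
   print(submitted_run.get_status())

   # in both cases, query the recorded metrics
   print(submitted_run.get_metrics())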

Methods

from_directory(path, auth=None)

(Deprecated) Load an experiment from the specified path.

get_docs_url()

Return the URL to the documentation for this class.

get_runs(type=None, tags=None, properties=None, include_children=False)

Return a generator of the runs for this experiment, in reverse chronological order.

list(workspace)

Return the list of experiments in the workspace.

start_logging(*args, **kwargs)

Start an interactive logging session in the specified experiment.

submit(config, tags=None, **kwargs)

Submit an experiment and return the active created run.

from_directory(path, auth=None)

(Deprecated) Load an experiment from the specified path.

from_directory(path, auth=None)

Parameters

path
str

The directory containing the experiment configuration files.

auth
ServicePrincipalAuthentication or InteractiveLoginAuthentication

The authentication object. If None, the default Azure CLI credentials are used or the API prompts for credentials.

default value: None

Returns

The Experiment object.

Return type

Experiment

get_docs_url()

Return the URL to the documentation for this class.

get_docs_url()

Parameters

cls

Returns

url

Return type

str

get_runs(type=None, tags=None, properties=None, include_children=False)

Return a generator of the runs for this experiment, in reverse chronological order.

get_runs(type=None, tags=None, properties=None, include_children=False)

Parameters

type
string

Filter the returned generator of runs by the provided type. See add_type_provider(runtype, run_factory) for creating run types.

default value: None
tags
string or dict

Filter runs by "tag" or {"tag": "value"}.

default value: None
properties
string or dict

Filter runs by "property" or {"property": "value"}.

default value: None
include_children
bool

By default, only top-level runs are fetched. Set to True to list all runs.

default value: False

Returns

A generator of runs matching the supplied filters.

Return type

generator[Run]
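
For example, a minimal sketch (assuming an existing experiment object; the tag name and value are hypothetical) of iterating the filtered runs:

   # iterate the most recent top-level runs carrying the "quality" tag
   for run in experiment.get_runs(tags={"quality": "baseline"}):
       print(run.id, run.get_status())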

list(workspace)

Return the list of experiments in the workspace.

list(workspace)

Parameters

workspace
Workspace

The workspace from which to list the experiments.

Returns

A list of Experiment objects.

Return type

list[Experiment]
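
For example, assuming a workspace object is available, the experiments in it can be enumerated like this:

   from azureml.core import Experiment

   # print the name of every experiment defined in the workspace
   for experiment in Experiment.list(workspace):
       print(experiment.name)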

start_logging(*args, **kwargs)

Start an interactive logging session in the specified experiment.

start_logging(*args, **kwargs)

Parameters

experiment
Experiment

The experiment.

args
kwargs
dict

Returns

The started run.

Return type

Run

Remarks

start_logging creates an interactive run for use in scenarios such as Jupyter notebooks. Any metrics that are logged during the session are added to the run record in the experiment. If an output directory is specified, the contents of that directory are uploaded as run artifacts upon run completion.


   experiment = Experiment(workspace, "MyExperiment")
   run = experiment.start_logging()
   ...
   run.log("Accuracy", accuracy)
   run.complete()

Note

run_id and name are not typically specified. run_id is automatically generated for each run and is unique within the experiment; name is typically used only for child run scenarios where a run has distinct "parts".

submit(config, tags=None, **kwargs)

Submit an experiment and return the active created run.

submit(config, tags=None, **kwargs)

Parameters

config
object

The configuration object to submit.

tags
dict

Tags to add to the submitted run, in the form {"tag": "value"}.

default value: None
kwargs
dict

Returns

The active created run.

Return type

Run

Remarks

Submit is an asynchronous call to the Azure Machine Learning platform to execute a trial on local or remote hardware. Depending on the configuration, submit will automatically prepare your execution environments, execute your code, and capture your source code and results into the experiment's run history.

To submit an experiment you first need to create a configuration object describing how the experiment is to be run. The configuration depends on the type of trial required.

An example of how to submit an experiment from your local machine is as follows:


   from azureml.core import ScriptRunConfig
   from azureml.core.runconfig import RunConfiguration

   # run a trial from the train.py code in your current directory
   config = ScriptRunConfig(source_directory='.', script='train.py',
       run_config=RunConfiguration())
   run = experiment.submit(config)

   # get the url to view the progress of the experiment and then wait
   # until the trial is complete
   print(run.get_portal_url())
   run.wait_for_completion()

For details on how to configure a run, see the configuration type details.

Attributes

name

Return the name of the experiment.

Returns

The name of the experiment.

Return type

str

workspace

Return the workspace containing the experiment.

Returns

Returns the workspace object.

Return type

Workspace

workspace_object

(Deprecated) Return the workspace containing the experiment.

Returns

The workspace object.

Return type

Workspace
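
As a small usage sketch (assuming an experiment object created as shown earlier), the attributes can be read directly:

   print(experiment.name)            # the experiment name, e.g. "MyExperiment"
   print(experiment.workspace.name)  # name of the containing Workspace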