PipelineStep class

Definition

Represents an execution step in an Azure Machine Learning pipeline.

Pipelines are constructed from multiple pipeline steps, which are distinct computational units in the pipeline. Each step can run independently and use isolated compute resources. Each step typically has its own named inputs, outputs, and parameters.

The PipelineStep class is the base class from which other built-in step classes designed for common scenarios inherit, such as PythonScriptStep, DataTransferStep, and HyperDriveStep.

For an overview of how Pipelines and PipelineSteps relate, see What are ML Pipelines.

PipelineStep(name, inputs, outputs, arguments=None, fix_port_name_collisions=False, resource_inputs=None)
Inheritance
builtins.object
PipelineStep

Parameters

name
str

The name of the pipeline step.

inputs
list

The list of step inputs.

outputs
list

The list of step outputs.

arguments
list

An optional list of arguments to pass to a script used in the step.

fix_port_name_collisions
bool

Specifies whether to fix name collisions. If True and an input and output have the same name, then the input is prefixed with "INPUT". The default is False.

resource_inputs
list

An optional list of inputs to be used as resources. Resources are downloaded to the script folder and provide a way to change the behavior of the script at run time.

Remarks

A PipelineStep is a unit of execution that typically needs a target of execution (a compute target), a script to execute with optional script arguments and inputs, and can produce outputs. A step can also take a number of other parameters specific to that step.

Pipeline steps can be configured together to construct a Pipeline, which represents a shareable and reusable Azure Machine Learning workflow. Each step of a pipeline can be configured to allow reuse of its previous run results if the step contents (scripts and dependencies) as well as its inputs and parameters remain unchanged. When a step is reused, instead of submitting the job to compute, the results from the previous run are immediately made available to any subsequent steps.

Azure Machine Learning Pipelines provides built-in steps for common scenarios. For examples, see the steps package and the AutoMLStep class. For an overview on constructing a Pipeline based on pre-built steps, see https://aka.ms/pl-first-pipeline.
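
For example, the following sketch builds a one-step pipeline from the built-in PythonScriptStep. The workspace configuration file, the "cpu-cluster" compute target, and the train.py script in the ./train folder are assumptions; substitute your own.

   from azureml.core import Workspace
   from azureml.pipeline.core import Pipeline
   from azureml.pipeline.steps import PythonScriptStep

   ws = Workspace.from_config()

   # PythonScriptStep is a built-in step derived from PipelineStep
   train_step = PythonScriptStep(name="train",
                                 script_name="train.py",
                                 source_directory="./train",
                                 compute_target="cpu-cluster")

   # Steps are assembled into a shareable, reusable workflow
   pipeline = Pipeline(workspace=ws, steps=[train_step])
   pipeline.validate()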

Pre-built steps derived from PipelineStep are steps that are used in one pipeline. If your machine learning workflow calls for creating steps that can be versioned and used across different pipelines, then use the Module class.

Keep the following in mind when working with pipeline steps, input/output data, and step reuse.

  • It is recommended that you use separate source_directory locations for separate steps. If all the scripts in your pipeline steps are in a single directory, the hash of that directory changes every time you make a change to one script, forcing all steps to rerun. For an example of using separate directories for different steps, see https://aka.ms/pl-get-started and the sketch after this list.

  • Maintaining separate folders for the scripts and dependent files of each step helps reduce the size of the snapshot created for each step, because only the specific folder is snapshotted. Because changes in any files in the step's source_directory trigger a re-upload of the snapshot, keeping a separate folder for each step also improves step reuse in the pipeline: if nothing in a step's source_directory has changed, the step's previous run is reused.

  • If data used in a step is in a datastore and allow_reuse is True, then changes to the data won't be detected. If the data is uploaded as part of the snapshot (under the step's source_directory), though this is not recommended, then the hash will change and a rerun will be triggered.
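
The following sketch applies these recommendations: each step has its own source_directory and opts into reuse with allow_reuse, a parameter of built-in steps such as PythonScriptStep. The folder names, script names, and "cpu-cluster" compute target are assumptions.

   from azureml.pipeline.steps import PythonScriptStep

   # Editing prep.py changes only the ./prep snapshot, so train_step
   # can still reuse its previous run.
   prep_step = PythonScriptStep(name="prep",
                                script_name="prep.py",
                                source_directory="./prep",
                                compute_target="cpu-cluster",
                                allow_reuse=True)

   train_step = PythonScriptStep(name="train",
                                 script_name="train.py",
                                 source_directory="./train",
                                 compute_target="cpu-cluster",
                                 allow_reuse=True)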

Methods

create_input_output_bindings(inputs, outputs, default_datastore, resource_inputs=None)

Create input and output bindings from the step inputs and outputs.

create_module_def(execution_type, input_bindings, output_bindings, param_defs=None, create_sequencing_ports=True, allow_reuse=True, version=None, module_type=None)

Create the module definition object that describes the step.

create_node(graph, default_datastore, context)

Create a node for the pipeline graph based on this step.

get_source_directory_and_hash_paths(context, source_directory, script_name, hash_paths)

Get source directory and hash paths for the step.

resolve_input_arguments(arguments, inputs, outputs, params)

Match inputs and outputs to arguments to produce an argument string.

run_after(step)

Run this step after the specified step.

validate_arguments(arguments, inputs, outputs)

Validate that the step inputs and outputs provided in arguments are in the inputs and outputs lists.

create_input_output_bindings(inputs, outputs, default_datastore, resource_inputs=None)

Create input and output bindings from the step inputs and outputs.

create_input_output_bindings(inputs, outputs, default_datastore, resource_inputs=None)

Parameters

inputs
list

The list of step inputs.

outputs
list

The list of step outputs.

default_datastore
AbstractAzureStorageDatastore or AzureDataLakeDatastore

The default datastore.

resource_inputs
list

The list of inputs to be used as resources. Resources are downloaded to the script folder and provide a way to change the behavior of the script at run time.

default value: None

Returns

Tuple of the input bindings and output bindings.

Return type

tuple

create_module_def(execution_type, input_bindings, output_bindings, param_defs=None, create_sequencing_ports=True, allow_reuse=True, version=None, module_type=None)

Create the module definition object that describes the step.

create_module_def(execution_type, input_bindings, output_bindings, param_defs=None, create_sequencing_ports=True, allow_reuse=True, version=None, module_type=None)

Parameters

execution_type
str

The execution type of the module.

input_bindings
list

The step input bindings.

output_bindings
list

The step output bindings.

param_defs
list

The step parameter definitions.

default value: None
create_sequencing_ports
bool

Specifies whether sequencing ports will be created for the module.

default value: True
allow_reuse
bool

Specifies whether the module will be available to be reused in future pipelines.

default value: True
version
str

The version of the module.

default value: None
module_type
str

The module type for the module creation service to create. Currently only two types are supported: 'None' and 'BatchInferencing'. module_type is different from execution_type, which specifies what kind of backend service to use to run this module.

default value: None

Returns

The module definition object.

Return type

ModuleDef

create_node(graph, default_datastore, context)

Create a node for the pipeline graph based on this step.

create_node(graph, default_datastore, context)

Parameters

graph
Graph

The graph to add the node to.

default_datastore
AbstractAzureStorageDatastore or AzureDataLakeDatastore

The default datastore to use for this step.

context
_GraphContext

The graph context object.

Returns

The created node.

Return type

Node
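
create_node is the main override point for a custom step. Below is a minimal, hypothetical sketch of the shape of such an override, using the helper methods documented on this page. The class name, the _step_inputs/_step_outputs attributes, and the "escloud" execution type are assumptions, and the call that actually adds the node to the graph (part of the Graph API) is elided.

   from azureml.pipeline.core.builder import PipelineStep

   class MyCustomStep(PipelineStep):
       def __init__(self, name, inputs, outputs, arguments=None):
           self._step_inputs = list(inputs or [])
           self._step_outputs = list(outputs or [])
           super(MyCustomStep, self).__init__(name, self._step_inputs,
                                              self._step_outputs,
                                              arguments=arguments)

       def create_node(self, graph, default_datastore, context):
           # Turn the step's inputs and outputs into graph bindings
           input_bindings, output_bindings = self.create_input_output_bindings(
               self._step_inputs, self._step_outputs, default_datastore)

           # Describe the step to the backend; "escloud" is an assumed
           # execution type, not necessarily right for your scenario
           module_def = self.create_module_def(
               execution_type="escloud",
               input_bindings=input_bindings,
               output_bindings=output_bindings)

           # A real implementation would now add a node built from
           # module_def to `graph` and return it (see the Graph class);
           # that call is elided from this sketch.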

get_source_directory_and_hash_paths(context, source_directory, script_name, hash_paths)

Get source directory and hash paths for the step.

get_source_directory_and_hash_paths(context, source_directory, script_name, hash_paths)

Parameters

context
_GraphContext

The graph context object.

source_directory
str

The source directory for the step.

script_name
str

The script name for the step.

hash_paths
list

The hash paths to use when determining the module fingerprint.

Returns

The source directory and hash paths.

Return type

tuple

resolve_input_arguments(arguments, inputs, outputs, params)

Match inputs and outputs to arguments to produce an argument string.

resolve_input_arguments(arguments, inputs, outputs, params)

Parameters

arguments
list

A list of step arguments.

inputs
list

A list of step inputs.

outputs
list

A list of step outputs.

params
list

A list of step parameters.

Returns

The resolved arguments list.

Return type

list

run_after(step)

Run this step after the specified step.

run_after(step)

Parameters

step
PipelineStep

The pipeline step to run before this step.

Remarks

If you want to run a step, say, step3 after both step1 and step2 are completed, you can use:


   step3.run_after(step1)
   step3.run_after(step2)
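
run_after is typically needed only when two steps have no explicit data dependency; when one step consumes another step's output, the execution order is inferred from that dependency.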

validate_arguments(arguments, inputs, outputs)

Validate that the step inputs and outputs provided in arguments are in the inputs and outputs lists.

validate_arguments(arguments, inputs, outputs)

Parameters

arguments
list

The list of step arguments.

inputs
list

The list of step inputs.

outputs
list

The list of step outputs.