ModuleStepBase Class
Adds a step to a pipeline that uses a specific module.
A ModuleStep derives from ModuleStepBase and is a pipeline node that uses an existing Module, and specifically one of its versions. To define which ModuleVersion is eventually used in the submitted pipeline, specify one of the following when creating the ModuleStep:
- ModuleVersion object
- Module object and a version value
- Only a Module, without a version value; in this case, the version resolved may vary across submissions.
You also need to define the mapping between the step's inputs and outputs to the ModuleVersion object's inputs and outputs.
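The three version-resolution rules above can be sketched in plain Python. This is a hedged illustration only: the `Module`, `ModuleVersion`, and `resolve_module_version` names below are hypothetical stand-ins, not the real azureml-pipeline-core API, and the real service resolves versions server-side.

```python
# Stand-in sketch of the version-resolution precedence described above.
# These classes are NOT the real azureml.pipeline.core types.

class ModuleVersion:
    """Stand-in for azureml.pipeline.core.ModuleVersion."""
    def __init__(self, version):
        self.version = version

class Module:
    """Stand-in for azureml.pipeline.core.Module."""
    def __init__(self, name, default_version, versions):
        self.name = name
        self.default_version = default_version
        self.versions = versions  # maps version string -> ModuleVersion

def resolve_module_version(module=None, version=None, module_version=None):
    # 1. An explicit ModuleVersion object wins outright.
    if module_version is not None:
        return module_version
    # 2. A Module plus a version value pins that exact version.
    if module is not None and version is not None:
        return module.versions[version]
    # 3. A Module alone: the version used may vary across submissions
    #    (modeled here as the module's current default version).
    if module is not None:
        return module.versions[module.default_version]
    raise ValueError("Either module or module_version must be provided.")

mv1 = ModuleVersion("1")
mv2 = ModuleVersion("2")
mod = Module("Preprocess", default_version="2", versions={"1": mv1, "2": mv2})

assert resolve_module_version(module_version=mv1) is mv1
assert resolve_module_version(module=mod, version="1") is mv1
assert resolve_module_version(module=mod) is mv2
```

The precedence (explicit ModuleVersion, then Module plus version, then Module alone) mirrors the bullet list above; only the first two options give reproducible submissions.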
Initialize ModuleStepBase.
- Inheritance
- ModuleStepBase
Constructor
ModuleStepBase(module=None, version=None, module_version=None, inputs_map=None, outputs_map=None, compute_target=None, runconfig=None, runconfig_pipeline_params=None, arguments=None, params=None, name=None, _workflow_provider=None)
Parameters
- module_version
- ModuleVersion
The ModuleVersion of the step. Either module or module_version must be provided.
- inputs_map
- Dict[str, Union[InputPortBinding, DataReference, PortDataReference, PipelineData, Dataset, DatasetDefinition, PipelineDataset]]
A dictionary where keys are names of inputs on the module_version and values are input port bindings.
- outputs_map
- Dict[str, Union[OutputPortBinding, DataReference, PortDataReference, PipelineData, Dataset, DatasetDefinition, PipelineDataset]]
A dictionary where keys are names of outputs on the module_version and values are output port bindings.
- compute_target
- <xref:DsvmCompute>, <xref:AmlCompute>, <xref:ComputeInstance>, <xref:RemoteTarget>, <xref:HDIClusterTarget>, str, tuple
Compute target to use. If unspecified, the target from the runconfig will be used. compute_target may be a compute target object or the string name of a compute target on the workspace. Optionally, if the compute target is not available at pipeline creation time, you may specify a tuple of ('compute target name', 'compute target type') to avoid fetching the compute target object (the AmlCompute type is 'AmlCompute' and the RemoteTarget type is 'VirtualMachine').
- runconfig
- RunConfiguration
The RunConfiguration to use (optional). A RunConfiguration can be used to specify additional requirements for the run, such as conda dependencies and a Docker image.
- runconfig_pipeline_params
- Dict[str, PipelineParameter]
Override runconfig properties at runtime using key-value pairs, each with the name of a runconfig property and the PipelineParameter for that property.
Supported values: 'NodeCount', 'MpiProcessCountPerNode', 'TensorflowWorkerCount', 'TensorflowParameterServerCount'
- arguments
- [str]
Command-line arguments for the script file. The arguments will be delivered to compute via the arguments parameter in RunConfiguration. For more details on how to handle arguments such as special symbols, refer to the arguments parameter in RunConfiguration.
- _workflow_provider
- <xref:azureml.pipeline.core._aeva_provider._AevaWorkflowProvider>
(Internal use only.) The workflow provider.
- name
- str
The name of the step.
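The dictionary- and tuple-shaped constructor arguments above can be sketched as plain Python values. This is a hedged illustration: the port names ('training_data', 'trained_model'), the cluster name 'cpu-cluster', and the `PipelineParameter` class below are hypothetical stand-ins, and in real code the binding values would be PipelineData, DataReference, and the actual azureml.pipeline.core.PipelineParameter.

```python
# Hypothetical sketch of the dictionary-shaped constructor arguments.

class PipelineParameter:
    """Stand-in: the real azureml.pipeline.core.PipelineParameter
    also takes (name, default_value)."""
    def __init__(self, name, default_value):
        self.name = name
        self.default_value = default_value

# inputs_map / outputs_map: keys are port names declared on the
# ModuleVersion; values are port bindings (PipelineData, DataReference,
# etc. in real code -- plain strings stand in for them here).
inputs_map = {"training_data": "<input port binding>"}
outputs_map = {"trained_model": "<output port binding>"}

# compute_target as a (name, type) tuple avoids fetching the compute
# object at pipeline-authoring time; 'cpu-cluster' is a hypothetical name.
compute_target = ("cpu-cluster", "AmlCompute")

# runconfig_pipeline_params: keys must be one of the supported runconfig
# property names ('NodeCount', 'MpiProcessCountPerNode',
# 'TensorflowWorkerCount', 'TensorflowParameterServerCount').
runconfig_pipeline_params = {
    "NodeCount": PipelineParameter(name="node_count", default_value=2),
}
```

At submission time the service would match each inputs_map/outputs_map key against the port names declared on the resolved ModuleVersion, so the keys must agree with the module definition exactly.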
Methods
- create_node: Create a pipeline graph node.
create_node
Create a pipeline graph node.
create_node(graph, default_datastore, context)
Parameters
- graph
- Graph
The graph object to add the node to.
- default_datastore
- AbstractAzureStorageDatastore or AzureDataLakeDatastore
The default datastore to use for this step.
- context
- <xref:azureml.pipeline.core._GraphContext>
(Internal use only.) The graph context object.
Returns
The node object.
Return type
Node