NotebookRunnerStep Class
Creates a step to run a local notebook as a step in an Azure Machine Learning pipeline.
- Inheritance
- azureml.pipeline.core._python_script_step_base._PythonScriptStepBase
- NotebookRunnerStep
Constructor
NotebookRunnerStep(name=None, notebook_run_config=None, runconfig_pipeline_params=None, compute_target=None, params=None, inputs=None, outputs=None, allow_reuse=True, version=None, output_notebook_pipeline_data_name=None)
Parameters
- notebook_run_config
- NotebookRunConfig
The associated notebook run config object for this step.
- runconfig_pipeline_params
- dict[str, PipelineParameter]
An override of runconfig properties at runtime using key-value pairs, each with name of the runconfig property and PipelineParameter for that property.
- compute_target
- Union[DsvmCompute, AmlCompute, RemoteCompute, str]
[Required] The compute target to use.
- params
- dict
A dictionary of name-value pairs to be accessible in the notebook. These are in addition to the parameters in azureml.contrib.notebook.NotebookRunConfig; azureml.pipeline.core.PipelineParameter objects can be provided here.
- inputs
- list[Union[InputPortBinding, DataReference, PortDataReference, PipelineData, DatasetConsumptionConfig, PipelineOutputFileDataset]]
A list of input port bindings.
- outputs
- list[Union[PipelineData, OutputPortBinding, PipelineOutputFileDataset]]
A list of output port bindings.
- allow_reuse
- bool
Indicates whether the step should reuse previous results when re-run with the same settings. Reuse is enabled by default. If the step contents (scripts/dependencies) as well as inputs and parameters remain unchanged, the output from the previous run of this step is reused. When reusing the step, instead of submitting the job to compute, the results from the previous run are immediately made available to any subsequent steps.
- version
- str
An optional version tag to denote a change in functionality for the module.
- output_notebook_pipeline_data_name
- str
The name of the intermediate pipeline data containing the output notebook (with output of each cell) of the run. Specifying this would produce an output notebook as one of the outputs of the step which can be further passed along in the pipeline.
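As a usage sketch of the constructor above (it assumes an existing workspace configuration, a compute target named "cpu-cluster", and a local notebook.ipynb; all resource names are illustrative, and it requires an Azure ML workspace to actually submit):

```python
from azureml.core import Workspace
from azureml.core.runconfig import RunConfiguration
from azureml.contrib.notebook import NotebookRunConfig, NotebookRunnerStep
from azureml.pipeline.core import Pipeline, PipelineParameter

ws = Workspace.from_config()  # assumes a config.json in the working directory

# Run configuration for the notebook; "cpu-cluster" is an illustrative
# compute target name.
run_config = RunConfiguration()
run_config.target = "cpu-cluster"

notebook_run_config = NotebookRunConfig(
    source_directory=".",
    notebook="notebook.ipynb",
    parameters={"rate": 0.1},
    run_config=run_config,
)

step = NotebookRunnerStep(
    name="run-notebook",
    notebook_run_config=notebook_run_config,
    compute_target="cpu-cluster",
    # PipelineParameter values here are exposed to the notebook in addition
    # to the parameters already set on the NotebookRunConfig.
    params={"threshold": PipelineParameter(name="threshold", default_value=0.5)},
    allow_reuse=True,
    # Produces the executed notebook (with cell outputs) as intermediate
    # pipeline data that downstream steps can consume.
    output_notebook_pipeline_data_name="executed_notebook",
)

pipeline = Pipeline(workspace=ws, steps=[step])
```

Setting output_notebook_pipeline_data_name is optional; when omitted, the executed notebook is not surfaced as a step output.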
Methods
- create_node: Create a node for a Python script step.

create_node
Create a node for a Python script step.
create_node(graph, default_datastore, context)
Parameters
- graph
- Graph
The graph object to add the node to.
- default_datastore
- AbstractAzureStorageDatastore or AzureDataLakeDatastore
The default datastore.
- context
- <xref:azureml.pipeline.core._GraphContext>
The graph context.
Returns
The created node.
Return type
Node