Convert ML experiments to production Python code

APPLIES TO: Python SDK azureml v1

In this tutorial, you learn how to convert Jupyter notebooks into Python scripts that are friendly to testing and automation, using the MLOpsPython code template and Azure Machine Learning. Typically, you use this process to take experimentation and training code from a Jupyter notebook and convert it into Python scripts. Those scripts can then be used for testing and CI/CD automation in your production environment.

A machine learning project requires experimentation, where hypotheses are tested with agile tools like Jupyter Notebook using real datasets. Once the model is ready for production, the model code should be placed in a production code repository. In some cases, the model code must be converted to Python scripts to be placed in the production code repository. This tutorial covers a recommended approach for exporting experimentation code to Python scripts.

In this tutorial, you learn how to:

  • Clean nonessential code
  • Refactor Jupyter Notebook code into functions
  • Create Python scripts for related tasks
  • Create unit tests

Prerequisites

  • Generate the MLOpsPython template and use the experimentation/Diabetes Ridge Regression Training.ipynb and experimentation/Diabetes Ridge Regression Scoring.ipynb notebooks. These notebooks are used as an example of converting from experimentation to production code.

Remove all nonessential code

Some code written during experimentation is only intended for exploratory purposes. Therefore, the first step in converting experimental code into production code is to remove this nonessential code. Removing nonessential code also makes the code more maintainable.

In this section, you'll remove code from the experimentation/Diabetes Ridge Regression Training.ipynb notebook. The statements printing the shape of X and y and the cell calling features.describe are just for data exploration and can be removed (a reference sketch of these statements appears after the cleaned code below). After removing the nonessential code, experimentation/Diabetes Ridge Regression Training.ipynb should look like the following code without the markdown:

from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
import joblib
import pandas as pd

sample_data = load_diabetes()

df = pd.DataFrame(
    data=sample_data.data,
    columns=sample_data.feature_names)
df['Y'] = sample_data.target

X = df.drop('Y', axis=1).values
y = df['Y'].values

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
data = {"train": {"X": X_train, "y": y_train},
        "test": {"X": X_test, "y": y_test}}

args = {
    "alpha": 0.5
}

reg_model = Ridge(**args)
reg_model.fit(data["train"]["X"], data["train"]["y"])

preds = reg_model.predict(data["test"]["X"])
mse = mean_squared_error(preds, y_test)
metrics = {"mse": mse}
print(metrics)

model_name = "sklearn_regression_model.pkl"
joblib.dump(value=reg_model, filename=model_name)
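
For reference, the exploratory statements that were removed looked something like the following sketch (illustrative only; the exact cells in the template may differ):

# Exploratory-only code removed from the notebook (illustrative sketch)
print(X.shape)            # shape of the feature matrix
print(y.shape)            # shape of the target vector
features.describe()       # summary statistics for the features dataframe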

Refactor code into functions

Second, the Jupyter code needs to be refactored into functions. Refactoring code into functions makes unit testing easier and makes the code more maintainable. In this section, you'll refactor:

  • The Diabetes Ridge Regression Training notebook (experimentation/Diabetes Ridge Regression Training.ipynb)
  • The Diabetes Ridge Regression Scoring notebook (experimentation/Diabetes Ridge Regression Scoring.ipynb)

Refactor Diabetes Ridge Regression Training notebook into functions

In experimentation/Diabetes Ridge Regression Training.ipynb, complete the following steps:

  1. Create a function called split_data to split the dataframe into test and train data. The function should take the dataframe df as a parameter, and return a dictionary containing the keys train and test.

    Move the code under the Split Data into Training and Validation Sets heading into the split_data function and modify it to return the data object.

  2. Create a function called train_model, which takes the parameters data and args and returns a trained model.

    Move the code under the heading Training Model on Training Set into the train_model function and modify it to return the reg_model object. Remove the args dictionary; the values will come from the args parameter.

  3. Create a function called get_model_metrics, which takes parameters reg_model and data, evaluates the model, and then returns a dictionary of metrics for the trained model.

    Move the code under the Validate Model on Validation Set heading into the get_model_metrics function and modify it to return the metrics object.

The three functions should be as follows:

# Split the dataframe into test and train data
def split_data(df):
    X = df.drop('Y', axis=1).values
    y = df['Y'].values

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)
    data = {"train": {"X": X_train, "y": y_train},
            "test": {"X": X_test, "y": y_test}}
    return data


# Train the model, return the model
def train_model(data, args):
    reg_model = Ridge(**args)
    reg_model.fit(data["train"]["X"], data["train"]["y"])
    return reg_model


# Evaluate the metrics for the model
def get_model_metrics(reg_model, data):
    preds = reg_model.predict(data["test"]["X"])
    mse = mean_squared_error(preds, data["test"]["y"])
    metrics = {"mse": mse}
    return metrics

Still in experimentation/Diabetes Ridge Regression Training.ipynb, complete the following steps:

  1. Create a new function called main, which takes no parameters and returns nothing.

  2. Move the code under the "Load Data" heading into the main function.

  3. Add invocations for the newly written functions into the main function:

    # Split Data into Training and Validation Sets
    data = split_data(df)
    
    # Train Model on Training Set
    args = {
        "alpha": 0.5
    }
    reg = train_model(data, args)
    
    # Validate Model on Validation Set
    metrics = get_model_metrics(reg, data)
    
  4. Move the code under the "Save Model" heading into the main function.

The main function should look like the following code:

def main():
    # Load Data
    sample_data = load_diabetes()

    df = pd.DataFrame(
        data=sample_data.data,
        columns=sample_data.feature_names)
    df['Y'] = sample_data.target

    # Split Data into Training and Validation Sets
    data = split_data(df)

    # Train Model on Training Set
    args = {
        "alpha": 0.5
    }
    reg = train_model(data, args)

    # Validate Model on Validation Set
    metrics = get_model_metrics(reg, data)

    # Save Model
    model_name = "sklearn_regression_model.pkl"

    joblib.dump(value=reg, filename=model_name)

At this stage, there should be no code remaining in the notebook that isn't in a function, other than import statements in the first cell.

Add a statement that calls the main function.

main()

After refactoring, experimentation/Diabetes Ridge Regression Training.ipynb should look like the following code without the markdown:

from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
import pandas as pd
import joblib


# Split the dataframe into test and train data
def split_data(df):
    X = df.drop('Y', axis=1).values
    y = df['Y'].values

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)
    data = {"train": {"X": X_train, "y": y_train},
            "test": {"X": X_test, "y": y_test}}
    return data


# Train the model, return the model
def train_model(data, args):
    reg_model = Ridge(**args)
    reg_model.fit(data["train"]["X"], data["train"]["y"])
    return reg_model


# Evaluate the metrics for the model
def get_model_metrics(reg_model, data):
    preds = reg_model.predict(data["test"]["X"])
    mse = mean_squared_error(preds, data["test"]["y"])
    metrics = {"mse": mse}
    return metrics


def main():
    # Load Data
    sample_data = load_diabetes()

    df = pd.DataFrame(
        data=sample_data.data,
        columns=sample_data.feature_names)
    df['Y'] = sample_data.target

    # Split Data into Training and Validation Sets
    data = split_data(df)

    # Train Model on Training Set
    args = {
        "alpha": 0.5
    }
    reg = train_model(data, args)

    # Validate Model on Validation Set
    metrics = get_model_metrics(reg, data)

    # Save Model
    model_name = "sklearn_regression_model.pkl"

    joblib.dump(value=reg, filename=model_name)

main()

Refactor Diabetes Ridge Regression Scoring notebook into functions

In experimentation/Diabetes Ridge Regression Scoring.ipynb, complete the following steps:

  1. Create a new function called init, which takes no parameters and returns nothing.
  2. Copy the code under the "Load Model" heading into the init function.

The init function should look like the following code:

def init():
    model_path = Model.get_model_path(
        model_name="sklearn_regression_model.pkl")
    model = joblib.load(model_path)

Once the init function has been created, replace all the code under the heading "Load Model" with a single call to init as follows:

init()

In experimentation/Diabetes Ridge Regression Scoring.ipynb, complete the following steps:

  1. Create a new function called run, which takes raw_data and request_headers as parameters and returns a dictionary of results as follows:

    {"result": result.tolist()}
    
  2. Copy the code under the "Prepare Data" and "Score Data" headings into the run function.

    The run function should look like the following code (remember to remove the statements that set the variables raw_data and request_headers; those values are supplied later when the run function is called):

    def run(raw_data, request_headers):
        data = json.loads(raw_data)["data"]
        data = numpy.array(data)
        result = model.predict(data)
    
        return {"result": result.tolist()}
    

Once the run function has been created, replace all the code under the "Prepare Data" and "Score Data" headings with the following code:

raw_data = '{"data":[[1,2,3,4,5,6,7,8,9,10],[10,9,8,7,6,5,4,3,2,1]]}'
request_header = {}
prediction = run(raw_data, request_header)
print("Test result: ", prediction)

The previous code sets the variables test_row and request_header, calls the run function with them, and prints the prediction.

After refactoring, experimentation/Diabetes Ridge Regression Scoring.ipynb should look like the following code without the markdown:

import json
import numpy
from azureml.core.model import Model
import joblib

def init():
    model_path = Model.get_model_path(
        model_name="sklearn_regression_model.pkl")
    model = joblib.load(model_path)

def run(raw_data, request_headers):
    data = json.loads(raw_data)["data"]
    data = numpy.array(data)
    result = model.predict(data)

    return {"result": result.tolist()}

init()
test_row = '{"data":[[1,2,3,4,5,6,7,8,9,10],[10,9,8,7,6,5,4,3,2,1]]}'
request_header = {}
prediction = run(test_row, request_header)
print("Test result: ", prediction)

Combine related functions in Python files

Third, related functions need to be merged into Python files to better support code reuse. In this section, you'll create Python files for the following notebooks:

  • The Diabetes Ridge Regression Training notebook (experimentation/Diabetes Ridge Regression Training.ipynb)
  • The Diabetes Ridge Regression Scoring notebook (experimentation/Diabetes Ridge Regression Scoring.ipynb)

Create Python file for the Diabetes Ridge Regression Training notebook

Convert your notebook to an executable script by running the following statement in a command prompt, which uses the nbconvert package and the path of experimentation/Diabetes Ridge Regression Training.ipynb:

jupyter nbconvert "Diabetes Ridge Regression Training.ipynb" --to script --output train

Once the notebook has been converted to train.py, remove any unwanted comments. Replace the call to main() at the end of the file with a conditional invocation like the following code, which ensures main runs only when the script is executed directly and not when its functions are imported by another module (such as the unit tests covered later in this guide):

if __name__ == '__main__':
    main()

Your train.py file should look like the following code:

from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
import pandas as pd
import joblib


# Split the dataframe into test and train data
def split_data(df):
    X = df.drop('Y', axis=1).values
    y = df['Y'].values

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)
    data = {"train": {"X": X_train, "y": y_train},
            "test": {"X": X_test, "y": y_test}}
    return data


# Train the model, return the model
def train_model(data, args):
    reg_model = Ridge(**args)
    reg_model.fit(data["train"]["X"], data["train"]["y"])
    return reg_model


# Evaluate the metrics for the model
def get_model_metrics(reg_model, data):
    preds = reg_model.predict(data["test"]["X"])
    mse = mean_squared_error(preds, data["test"]["y"])
    metrics = {"mse": mse}
    return metrics


def main():
    # Load Data
    sample_data = load_diabetes()

    df = pd.DataFrame(
        data=sample_data.data,
        columns=sample_data.feature_names)
    df['Y'] = sample_data.target

    # Split Data into Training and Validation Sets
    data = split_data(df)

    # Train Model on Training Set
    args = {
        "alpha": 0.5
    }
    reg = train_model(data, args)

    # Validate Model on Validation Set
    metrics = get_model_metrics(reg, data)

    # Save Model
    model_name = "sklearn_regression_model.pkl"

    joblib.dump(value=reg, filename=model_name)

if __name__ == '__main__':
    main()

train.py can now be invoked from a terminal by running python train.py. The functions from train.py can also be called from other files.
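
For example, another script in the same directory could import and reuse these functions. The following is a minimal sketch (run_training.py is a hypothetical file name, not part of the template):

# run_training.py (hypothetical): reuse the functions defined in train.py
import pandas as pd
from sklearn.datasets import load_diabetes

from train import split_data, train_model, get_model_metrics

# Build the same diabetes dataframe used in the tutorial
sample_data = load_diabetes()
df = pd.DataFrame(
    data=sample_data.data,
    columns=sample_data.feature_names)
df['Y'] = sample_data.target

# Call the refactored functions end to end
data = split_data(df)
reg_model = train_model(data, {"alpha": 0.5})
print(get_model_metrics(reg_model, data))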

The train_aml.py file found in the diabetes_regression/training directory in the MLOpsPython repository calls the functions defined in train.py in the context of an Azure Machine Learning experiment job. The functions can also be called in unit tests, covered later in this guide.

Create Python file for the Diabetes Ridge Regression Scoring notebook

Convert your notebook to an executable script by running the following statement in a command prompt, which uses the nbconvert package and the path of experimentation/Diabetes Ridge Regression Scoring.ipynb:

jupyter nbconvert "Diabetes Ridge Regression Scoring.ipynb" --to script --output score

Once the notebook has been converted to score.py, remove any unwanted comments. Your score.py file should look like the following code:

import json
import numpy
from azureml.core.model import Model
import joblib

def init():
    model_path = Model.get_model_path(
        model_name="sklearn_regression_model.pkl")
    model = joblib.load(model_path)

def run(raw_data, request_headers):
    data = json.loads(raw_data)["data"]
    data = numpy.array(data)
    result = model.predict(data)

    return {"result": result.tolist()}

init()
test_row = '{"data":[[1,2,3,4,5,6,7,8,9,10],[10,9,8,7,6,5,4,3,2,1]]}'
request_header = {}
prediction = run(test_row, request_header)
print("Test result: ", prediction)

The model variable needs to be global so that it's visible throughout the script. Add the following statement at the beginning of the init function:

global model

After adding the previous statement, the init function should look like the following code:

def init():
    global model

    # load the model from file into a global object
    model_path = Model.get_model_path(
        model_name="sklearn_regression_model.pkl")
    model = joblib.load(model_path)

Create unit tests for each Python file

Fourth, create unit tests for your Python functions. Unit tests protect code against functional regressions and make it easier to maintain. In this section, you'll create unit tests for the functions in train.py.

train.py contains multiple functions, but in this tutorial we'll create only a single unit test, for the train_model function, using the Pytest framework. Pytest isn't the only Python unit testing framework, but it's one of the most commonly used. For more information, visit Pytest.

A unit test usually contains three main actions:

  • Arrange - create and set up the necessary objects
  • Act - exercise the object or function under test
  • Assert - verify that the result is what's expected

The unit test will call train_model with some hard-coded data and arguments, and validate that train_model acted as expected by using the resulting trained model to make a prediction and comparing that prediction to an expected value.

import numpy as np
from code.training.train import train_model


def test_train_model():
    # Arrange
    X_train = np.array([1, 2, 3, 4, 5, 6]).reshape(-1, 1)
    y_train = np.array([10, 9, 8, 8, 6, 5])
    data = {"train": {"X": X_train, "y": y_train}}

    # Act
    reg_model = train_model(data, {"alpha": 1.2})

    # Assert
    preds = reg_model.predict([[1], [2]])
    np.testing.assert_almost_equal(preds, [9.93939393939394, 9.03030303030303])
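
To try the test locally, save it in a file such as test_train.py (a hypothetical name) at a location where the code.training.train module is importable, and run Pytest from a command prompt (assuming the pytest package is installed):

python -m pytest test_train.py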

Next steps

Now that you understand how to convert from experimentation to production code, see the following links for more information and next steps: