Tutorial: Train image classification models with MNIST data and scikit-learn using Azure Machine Learning

In this tutorial, you train a machine learning model on remote compute resources. You'll use the training and deployment workflow for the Azure Machine Learning service (preview) in a Python Jupyter notebook. You can then use the notebook as a template to train your own machine learning model with your own data. This tutorial is part one of a two-part tutorial series.

This tutorial trains a simple logistic regression by using the MNIST dataset and scikit-learn with the Azure Machine Learning service. MNIST is a popular dataset consisting of 70,000 grayscale images. Each image is a handwritten digit of 28 x 28 pixels, representing a number from zero to nine. The goal is to create a multiclass classifier to identify the digit a given image represents.
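For intuition about the technique itself (separate from the tutorial's remote workflow), the sketch below trains the same kind of multiclass logistic regression locally, using scikit-learn's bundled 8x8 digits dataset as a small stand-in for MNIST:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Small stand-in for MNIST: 1,797 8x8 grayscale digit images bundled with scikit-learn.
X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel intensities to 0-1, as the tutorial does with /255.0 for MNIST

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# liblinear fits one-vs-rest binary classifiers, giving a multiclass model;
# C is the inverse of the regularization rate used later in the tutorial script
clf = LogisticRegression(C=1.0 / 0.5, solver="liblinear", random_state=42)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

The full MNIST images are larger (784 features instead of 64), but the classifier and preprocessing are the same shape as what the remote training script does later.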

Learn how to take the following actions:

  • Set up your development environment.
  • Access and examine the data.
  • Train a simple logistic regression model on a remote cluster.
  • Review training results and register the best model.

You learn how to select a model and deploy it in part two of this tutorial.

If you don't have an Azure subscription, create a free account before you begin. Try the free or paid version of Azure Machine Learning service today.


The code in this article was tested with Azure Machine Learning SDK version 1.0.41.


Skip to Set up your development environment to read through the notebook steps, or use the instructions below to get the notebook and run it on Azure Notebooks or your own notebook server. To run the notebook, you will need:

  • A Python 3.6 notebook server with the following installed:
    • The Azure Machine Learning SDK for Python
    • matplotlib and scikit-learn
  • The tutorial notebook and the file utils.py
  • A machine learning workspace
  • The configuration file for the workspace, in the same directory as the notebook

Get all these prerequisites from either of the sections below.
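The workspace configuration file mentioned above is a small JSON document. A typical config.json has roughly this shape (the values shown are placeholders for your own subscription details):

```json
{
    "subscription_id": "<your-subscription-id>",
    "resource_group": "<your-resource-group>",
    "workspace_name": "<your-workspace-name>"
}
```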

Use a cloud notebook server in your workspace

It's easy to get started with your own cloud-based notebook server. The Azure Machine Learning SDK for Python is already installed and configured for you once you create this cloud resource.

  • After you launch the notebook webpage, open the tutorials/img-classification-part1-training.ipynb notebook.

Use your own Jupyter notebook server

  1. Use the instructions at Create an Azure Machine Learning service workspace to do the following:

    • Create a Miniconda environment
    • Install the Azure Machine Learning SDK for Python
    • Create a workspace
    • Write a workspace configuration file (aml_config/config.json)
  2. Clone the GitHub repository.

    git clone https://github.com/Azure/MachineLearningNotebooks.git
  3. Start the notebook server from your cloned directory.

    jupyter notebook

After you complete the steps, run the tutorials/img-classification-part1-training.ipynb notebook from your cloned directory.

Set up your development environment

All the setup for your development work can be accomplished in a Python notebook. Setup includes the following actions:

  • Import Python packages.
  • Connect to a workspace, so that your local computer can communicate with remote resources.
  • Create an experiment to track all your runs.
  • Create a remote compute target to use for training.

Import packages

Import the Python packages you need in this session. Also display the Azure Machine Learning SDK version:

%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

import azureml.core
from azureml.core import Workspace

# check core SDK version number
print("Azure ML SDK Version: ", azureml.core.VERSION)

Connect to a workspace

Create a workspace object from the existing workspace. Workspace.from_config() reads the file config.json and loads the details into an object named ws:

# load workspace configuration from the config.json file in the current folder.
ws = Workspace.from_config()
print(ws.name, ws.location, ws.resource_group, sep = '\t')

Create an experiment

Create an experiment to track the runs in your workspace. A workspace can have multiple experiments:

experiment_name = 'sklearn-mnist'

from azureml.core import Experiment
exp = Experiment(workspace=ws, name=experiment_name)

Create or attach an existing compute resource

By using Azure Machine Learning Compute, a managed service, data scientists can train machine learning models on clusters of Azure virtual machines. Examples include VMs with GPU support. In this tutorial, you create Azure Machine Learning Compute as your training environment. The code below creates the compute cluster for you if it doesn't already exist in your workspace.

Creation of the compute takes about five minutes. If the compute is already in the workspace, the code uses it and skips the creation process.

from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
import os

# choose a name for your cluster
compute_name = os.environ.get("AML_COMPUTE_CLUSTER_NAME", "cpucluster")
compute_min_nodes = os.environ.get("AML_COMPUTE_CLUSTER_MIN_NODES", 0)
compute_max_nodes = os.environ.get("AML_COMPUTE_CLUSTER_MAX_NODES", 4)

# This example uses a CPU VM. To use a GPU VM, set the SKU to STANDARD_NC6
vm_size = os.environ.get("AML_COMPUTE_CLUSTER_SKU", "STANDARD_D2_V2")

if compute_name in ws.compute_targets:
    compute_target = ws.compute_targets[compute_name]
    if compute_target and type(compute_target) is AmlCompute:
        print('found compute target. just use it. ' + compute_name)
else:
    print('creating a new compute target...')
    provisioning_config = AmlCompute.provisioning_configuration(vm_size = vm_size,
                                                                min_nodes = compute_min_nodes,
                                                                max_nodes = compute_max_nodes)

    # create the cluster
    compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)

    # can poll for a minimum number of nodes and for a specific timeout.
    # if no min node count is provided it will use the scale settings for the cluster
    compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)

    # For a more detailed view of current AmlCompute status, use get_status()

You now have the necessary packages and compute resources to train a model in the cloud.

Explore data

Before you train a model, you need to understand the data that you use to train it. You also need to copy the data into the cloud so that your cloud training environment can access it. In this section, you learn how to take the following actions:

  • Download the MNIST dataset.
  • Display some sample images.
  • Upload data to the cloud.

Download the MNIST dataset

Download the MNIST dataset and save the files into a data directory locally. Images and labels for both training and testing are downloaded:

import urllib.request
import os

data_folder = os.path.join(os.getcwd(), 'data')
os.makedirs(data_folder, exist_ok = True)

urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz', filename=os.path.join(data_folder, 'train-images.gz'))
urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz', filename=os.path.join(data_folder, 'train-labels.gz'))
urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz', filename=os.path.join(data_folder, 'test-images.gz'))
urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz', filename=os.path.join(data_folder, 'test-labels.gz'))

You will see output similar to this: ('./data/test-labels.gz', <http.client.HTTPMessage at 0x7f40864c77b8>)

Display some sample images

Load the compressed files into numpy arrays. Then use matplotlib to plot 30 random images from the dataset with their labels above them. This step requires a load_data function that's included in the utils.py file. This file is included in the sample folder. Make sure it's placed in the same folder as this notebook. The load_data function simply parses the compressed files into numpy arrays:

# make sure utils.py is in the same directory as this code
from utils import load_data

# note we also shrink the intensity values (X) from 0-255 to 0-1. This helps the model converge faster.
X_train = load_data(os.path.join(data_folder, 'train-images.gz'), False) / 255.0
X_test = load_data(os.path.join(data_folder, 'test-images.gz'), False) / 255.0
y_train = load_data(os.path.join(data_folder, 'train-labels.gz'), True).reshape(-1)
y_test = load_data(os.path.join(data_folder, 'test-labels.gz'), True).reshape(-1)

# now let's show some randomly chosen images from the training set.
count = 0
sample_size = 30
plt.figure(figsize = (16, 6))
for i in np.random.permutation(X_train.shape[0])[:sample_size]:
    count = count + 1
    plt.subplot(1, sample_size, count)
    plt.text(x=10, y=-10, s=y_train[i], fontsize=18)
    plt.imshow(X_train[i].reshape(28, 28), cmap=plt.cm.Greys)

A random sample of images displays:


Now you have an idea of what these images look like and the expected prediction outcome.
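The load_data helper used above comes from the tutorial's utils.py, which isn't reproduced in this article. As a hypothetical sketch of what such a helper does (the real utils.py may differ in details), the MNIST files use the IDX format: a big-endian header followed by raw uint8 values:

```python
import gzip
import struct
import numpy as np

def load_idx(path, is_labels):
    """Parse a gzipped MNIST IDX file into a numpy array.

    Hypothetical re-implementation of the tutorial's load_data helper:
    image files yield an (n, rows*cols) array, label files an (n, 1) array.
    """
    with gzip.open(path, 'rb') as f:
        if is_labels:
            # label file header: magic number, item count (big-endian uint32s)
            magic, n = struct.unpack('>II', f.read(8))
            data = np.frombuffer(f.read(), dtype=np.uint8)
            return data.reshape(n, 1)
        # image file header: magic number, image count, rows, cols
        magic, n, rows, cols = struct.unpack('>IIII', f.read(16))
        data = np.frombuffer(f.read(), dtype=np.uint8)
        return data.reshape(n, rows * cols)
```

This matches how the notebook uses load_data: images come back as one flattened row of 784 pixels per digit, and labels need the .reshape(-1) call to become a flat vector.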

Upload data to the cloud

Now make the data accessible remotely by uploading it from your local machine into Azure, so that it can be accessed for remote training. The datastore is a convenient construct associated with your workspace for you to upload or download data. You can also interact with it from your remote compute targets. It's backed by an Azure Blob storage account.

The MNIST files are uploaded into a directory named mnist at the root of the datastore:

ds = ws.get_default_datastore()
print(ds.datastore_type, ds.account_name, ds.container_name)

ds.upload(src_dir=data_folder, target_path='mnist', overwrite=True, show_progress=True)

You now have everything you need to start training a model.

Train on a remote cluster

For this task, submit the job to the remote training cluster you set up earlier. To submit a job, you:

  • Create a directory
  • Create a training script
  • Create an estimator object
  • Submit the job

Create a directory

Create a directory to deliver the necessary code from your computer to the remote resource.

import os
script_folder  = os.path.join(os.getcwd(), "sklearn-mnist")
os.makedirs(script_folder, exist_ok=True)

Create a training script

To submit the job to the cluster, first create a training script. Run the following code to create the training script called train.py in the directory you just created.

%%writefile $script_folder/train.py

import argparse
import os
import numpy as np

from sklearn.linear_model import LogisticRegression
from sklearn.externals import joblib

from azureml.core import Run
from utils import load_data

# let user feed in 2 parameters, the location of the data files (from datastore), and the regularization rate of the logistic regression model
parser = argparse.ArgumentParser()
parser.add_argument('--data-folder', type=str, dest='data_folder', help='data folder mounting point')
parser.add_argument('--regularization', type=float, dest='reg', default=0.01, help='regularization rate')
args = parser.parse_args()

data_folder = args.data_folder
print('Data folder:', data_folder)

# load train and test set into numpy arrays
# note we scale the pixel intensity values to 0-1 (by dividing it with 255.0) so the model can converge faster.
X_train = load_data(os.path.join(data_folder, 'train-images.gz'), False) / 255.0
X_test = load_data(os.path.join(data_folder, 'test-images.gz'), False) / 255.0
y_train = load_data(os.path.join(data_folder, 'train-labels.gz'), True).reshape(-1)
y_test = load_data(os.path.join(data_folder, 'test-labels.gz'), True).reshape(-1)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape, sep = '\n')

# get hold of the current run
run = Run.get_context()

print('Train a logistic regression model with regularization rate of', args.reg)
clf = LogisticRegression(C=1.0/args.reg, solver="liblinear", multi_class="auto", random_state=42)
clf.fit(X_train, y_train)

print('Predict the test set')
y_hat = clf.predict(X_test)

# calculate accuracy on the prediction
acc = np.average(y_hat == y_test)
print('Accuracy is', acc)

run.log('regularization rate', np.float(args.reg))
run.log('accuracy', np.float(acc))

os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=clf, filename='outputs/sklearn_mnist_model.pkl')

Notice how the script gets data and saves models:

  • The training script reads an argument to find the directory that contains the data. When you submit the job later, you point to the datastore for this argument: parser.add_argument('--data-folder', type=str, dest='data_folder', help='data folder mounting point')

  • The training script saves your model into a directory named outputs. Anything written in this directory is automatically uploaded into your workspace. You access your model from this directory later in the tutorial: joblib.dump(value=clf, filename='outputs/sklearn_mnist_model.pkl')

  • The training script requires the file utils.py to load the dataset correctly. The following code copies utils.py into script_folder so that the file can be accessed along with the training script on the remote resource.

    import shutil
    shutil.copy('utils.py', script_folder)

Create an estimator

An SKLearn estimator object is used to submit the run. Create your estimator by running the following code to define these items:

  • The name of the estimator object, est.
  • The directory that contains your scripts. All the files in this directory are uploaded into the cluster nodes for execution.
  • The compute target. In this case, you use the Azure Machine Learning compute cluster you created.
  • The training script name, train.py.
  • Parameters required from the training script.

In this tutorial, this target is AmlCompute. All files in the script folder are uploaded into the cluster nodes for the run. The data_folder is set to use the datastore, ds.path('mnist').as_mount():

from azureml.train.sklearn import SKLearn

script_params = {
    '--data-folder': ds.path('mnist').as_mount(),
    '--regularization': 0.5
}

est = SKLearn(source_directory=script_folder,
              script_params=script_params,
              compute_target=compute_target,
              entry_script='train.py')

Submit the job to the cluster

Run the experiment by submitting the estimator object:

run = exp.submit(config=est)

Because the call is asynchronous, it returns a Preparing or Running state as soon as the job is started.

Monitor a remote run

In total, the first run takes about 10 minutes. But for subsequent runs, as long as the script dependencies don't change, the same image is reused, so the container startup time is much faster.

What happens while you wait:

  • Image creation: A Docker image is created that matches the Python environment specified by the estimator. The image is uploaded to the workspace. Image creation and uploading takes about five minutes.

    This stage happens once for each Python environment because the container is cached for subsequent runs. During image creation, logs are streamed to the run history. You can monitor the image creation progress by using these logs.

  • Scaling: If the remote cluster requires more nodes to do the run than are currently available, additional nodes are added automatically. Scaling typically takes about five minutes.

  • Running: In this stage, the necessary scripts and files are sent to the compute target. Then datastores are mounted or copied, and the entry_script is run. While the job is running, stdout and the ./logs directory are streamed to the run history. You can monitor the run's progress by using these logs.

  • Post-processing: The ./outputs directory of the run is copied over to the run history in your workspace, so you can access these results.

You can check the progress of a running job in several ways. This tutorial uses a Jupyter widget and a wait_for_completion method.

Jupyter widget

Watch the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10 to 15 seconds until the job finishes:

from azureml.widgets import RunDetails
RunDetails(run).show()

The widget will look like the following at the end of training:

Notebook widget

If you need to cancel a run, you can follow these instructions.

Get log results upon completion

Model training and monitoring happen in the background. Wait until the model has finished training before you run more code. Use wait_for_completion to show when the model training is finished:

run.wait_for_completion(show_output=False) # specify True for a verbose log

Display run results

You now have a model trained on a remote cluster. Retrieve the accuracy of the model:

print(run.get_metrics())

The output shows the remote model has an accuracy of 0.9204:

{'regularization rate': 0.8, 'accuracy': 0.9204}

In the next tutorial, you explore this model in more detail.

Register model

The last step in the training script wrote the file outputs/sklearn_mnist_model.pkl in a directory named outputs in the VM of the cluster where the job ran. outputs is a special directory in that all content in it is automatically uploaded to your workspace. This content appears in the run record in the experiment under your workspace, so the model file is now also available in your workspace.

You can see the files associated with that run:

print(run.get_file_names())

Register the model in the workspace, so that you or other collaborators can later query, examine, and deploy this model:

# register model
model = run.register_model(model_name='sklearn_mnist', model_path='outputs/sklearn_mnist_model.pkl')
print(model.name, model.id, model.version, sep = '\t')

Clean up resources

The resources you created can be used as prerequisites for other Azure Machine Learning service tutorials and how-to articles.

If you don't plan to use the resources you created, delete them so you don't incur any charges:

  1. In the Azure portal, select Resource groups on the far left.

  2. From the list, select the resource group you created.

  3. Select Delete resource group.

  4. Enter the resource group name. Then select Delete.

You can also delete just the Azure Machine Learning Compute cluster. However, autoscale is turned on, and the cluster minimum is zero, so this particular resource won't incur additional compute charges when not in use:

# optionally, delete the Azure Machine Learning Compute cluster
compute_target.delete()

Next steps

In this Azure Machine Learning service tutorial, you used Python for the following tasks:

  • Set up your development environment.
  • Access and examine the data.
  • Train multiple models on a remote cluster using the popular scikit-learn machine learning library.
  • Review training details and register the best model.

You're ready to deploy this registered model by using the instructions in the next part of the tutorial series: