MLflow Model Serving on Azure Databricks

Important

This feature is in Public Preview.

MLflow Model Serving allows you to host machine learning models from Model Registry as REST endpoints that are updated automatically based on the availability of model versions and their stages.

When you enable model serving for a given registered model, Azure Databricks automatically creates a unique single-node cluster for the model and deploys all non-archived versions of the model on that cluster. Azure Databricks restarts the cluster if any error occurs, and terminates the cluster when you disable model serving for the model. Model serving automatically syncs with Model Registry and deploys any new registered model versions. Deployed model versions can be queried with standard REST API requests. Azure Databricks authenticates requests to the model using its standard authentication.

While this service is in preview, Databricks recommends its use for low-throughput and non-critical applications. The target throughput is 20 qps and the target availability is 99.5%, although no guarantee is made as to either. Additionally, there is a payload size limit of 16 MB per request.

Each model version is deployed using MLflow model deployment and runs in a Conda environment specified by its dependencies.

Note

The cluster is maintained as long as serving is enabled, even if no active model version exists. To terminate the serving cluster, disable model serving for the registered model.

Requirements

MLflow Model Serving is available for Python MLflow models. All model dependencies must be declared in the conda environment.
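As a minimal sketch of what "declared in the conda environment" means, the snippet below builds the kind of conda environment spec that MLflow records alongside a model. The package names and versions are illustrative assumptions, as is the `mlflow.sklearn.log_model` call shown in the comment (not executed here); adapt both to your model's actual dependencies.

```python
# A minimal conda environment spec of the kind MLflow stores with a model.
# Everything the model needs at inference time must be listed; the versions
# below are illustrative, not prescriptive.
conda_env = {
    "name": "iris-env",
    "channels": ["conda-forge"],
    "dependencies": [
        "python=3.8",
        "pip",
        {"pip": ["mlflow", "scikit-learn==1.0.2"]},
    ],
}

# When logging the model, the spec is passed so serving can recreate the
# environment, e.g. (hypothetical call, not executed here):
#   mlflow.sklearn.log_model(model, "model", conda_env=conda_env,
#                            registered_model_name="iris-classifier")

# Extract the pip requirements for inspection.
pip_deps = next(d for d in conda_env["dependencies"] if isinstance(d, dict))["pip"]
print(pip_deps)
```

If the environment omits a package the model imports at scoring time, the served version fails to start, so it is worth double-checking the spec against the model's imports.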

Enable and disable model serving

You enable a model for serving from its registered model page.

  1. Click the Serving tab. If the model is not already enabled for serving, the Enable Serving button appears.
  2. Click Enable Serving. The Serving tab appears with the Status shown as Pending. After a few minutes, the Status changes to Ready.

To disable a model for serving, click Stop.
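Serving can reportedly also be toggled programmatically. The endpoint paths below (`/api/2.0/preview/mlflow/endpoints/enable` and `.../disable`) are an assumption based on the legacy preview API and should be verified against your workspace's API reference; the sketch only builds the request rather than sending it.

```python
def build_serving_request(databricks_instance: str, model_name: str, enable: bool):
    """Return the (url, payload) pair for an enable/disable serving call.

    NOTE: the /api/2.0/preview/mlflow/endpoints/* paths are assumptions based
    on the legacy preview API; confirm them in your workspace's API docs.
    """
    action = "enable" if enable else "disable"
    url = f"https://{databricks_instance}/api/2.0/preview/mlflow/endpoints/{action}"
    payload = {"registered_model_name": model_name}
    return url, payload

url, payload = build_serving_request(
    "adb-123.azuredatabricks.net", "iris-classifier", enable=True
)
# The request itself would be sent with standard Databricks authentication, e.g.:
#   requests.post(url, json=payload,
#                 headers={"Authorization": f"Bearer {token}"})
print(url)
```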

Model serving from Model Registry

You enable serving of a registered model in the Model Registry UI.

Enable serving

Model version URIs

Each deployed model version is assigned one or more unique URIs. At minimum, each model version is assigned a URI constructed as follows:

<databricks-instance>/model/<registered-model-name>/<model-version>/invocations

For example, to call version 1 of a model registered as iris-classifier, use this URI:

https://<databricks-instance>/model/iris-classifier/1/invocations

You can also call a model version by its stage. For example, if version 1 is in the Production stage, it can also be scored using this URI:

https://<databricks-instance>/model/iris-classifier/Production/invocations
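The URI scheme above is regular enough to capture in a small helper; the instance name used below is a hypothetical placeholder.

```python
def model_invocation_uri(databricks_instance: str, model_name: str,
                         version_or_stage: str) -> str:
    """Build the invocations URI for a model version (e.g. "1")
    or a stage (e.g. "Production")."""
    return (f"https://{databricks_instance}/model/"
            f"{model_name}/{version_or_stage}/invocations")

# Both of these refer to the same deployment when version 1 is in Production:
by_version = model_invocation_uri("adb-123.azuredatabricks.net",
                                  "iris-classifier", "1")
by_stage = model_invocation_uri("adb-123.azuredatabricks.net",
                                "iris-classifier", "Production")
print(by_version)
print(by_stage)
```

Calling by stage is convenient for clients that should always hit whatever version currently holds the Production stage, while calling by version pins the request to one specific deployment.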

The list of available model URIs appears at the top of the Model Versions tab on the serving page.

Manage served versions

All active (non-archived) model versions are deployed, and you can query them using the URIs. Azure Databricks automatically deploys new model versions when they are registered, and automatically removes old versions when they are archived.

Note

All deployed versions of a registered model share the same cluster.

Manage model access rights

Model access rights are inherited from the Model Registry. Enabling or disabling the serving feature requires 'manage' permission on the registered model. Anyone with read rights can score any of the deployed versions.

Score deployed model versions

To score a deployed model, you can use the UI or send a REST API request to the model URI.

Score via UI

This is the easiest and fastest way to test the model. You can insert the model input data in JSON format and click Send Request. If the model has been logged with an input example (as shown in the graphic above), click Load Example to load the input example.

Score via REST API request

You can send a scoring request through the REST API using standard Databricks authentication. The examples below demonstrate authentication using a personal access token.

Given a MODEL_VERSION_URI like https://<databricks-instance>/model/iris-classifier/Production/invocations (where <databricks-instance> is the name of your Databricks instance) and a Databricks REST API token called DATABRICKS_API_TOKEN, here are some example snippets showing how to query a served model:

Bash

curl -u token:$DATABRICKS_API_TOKEN $MODEL_VERSION_URI \
  -H 'Content-Type: application/json; format=pandas-records' \
  -d '[
    {
      "sepal_length": 5.1,
      "sepal_width": 3.5,
      "petal_length": 1.4,
      "petal_width": 0.2
    }
  ]'

Python

import requests

def score_model(model_uri, databricks_token, data):
  headers = {
    "Authorization": f"Bearer {databricks_token}",
    "Content-Type": "application/json; format=pandas-records",
  }
  # Accept either a list of records or a pandas DataFrame.
  data_json = data if isinstance(data, list) else data.to_dict(orient="records")
  response = requests.post(model_uri, headers=headers, json=data_json)
  if response.status_code != 200:
    raise Exception(f"Request failed with status {response.status_code}, {response.text}")
  return response.json()

data = [{
  "sepal_length": 5.1,
  "sepal_width": 3.5,
  "petal_length": 1.4,
  "petal_width": 0.2
}]
score_model(MODEL_VERSION_URI, DATABRICKS_API_TOKEN, data)

# can also score DataFrames
import pandas as pd
score_model(MODEL_VERSION_URI, DATABRICKS_API_TOKEN, pd.DataFrame(data))

Power BI

You can score a dataset in Power BI Desktop using the following steps:

  1. Open the dataset you want to score.

  2. Go to Transform Data.

  3. Right-click in the left panel and select Create New Query.

  4. Go to View > Advanced Editor.

  5. Replace the query body with the code snippet below, after filling in an appropriate DATABRICKS_API_TOKEN and MODEL_VERSION_URI.

    (dataset as table ) as table =>
    let
      call_predict = (dataset as table ) as list =>
      let
        apiToken = DATABRICKS_API_TOKEN,
        modelUri = MODEL_VERSION_URI,
        responseList = Json.Document(Web.Contents(modelUri,
          [
            Headers = [
              #"Content-Type" = "application/json; format=pandas-records",
              #"Authorization" = Text.Format("Bearer #{0}", {apiToken})
            ],
            Content = Json.FromValue(dataset)
          ]
        ))
      in
        responseList,
      predictionList = List.Combine(List.Transform(Table.Split(dataset, 256), (x) => call_predict(x))),
      predictionsTable = Table.FromList(predictionList, (x) => {x}, {"Prediction"}),
      datasetWithPrediction = Table.Join(
        Table.AddIndexColumn(predictionsTable, "index"), "index",
        Table.AddIndexColumn(dataset, "index"), "index")
    in
      datasetWithPrediction
    
  6. Name the query with your desired model name.

  7. Open the advanced query editor for your dataset and apply the model function.

For more information about input data formats accepted by the server (for example, pandas split-oriented format), see the MLflow documentation.
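To make the difference concrete, the sketch below encodes the same single-row iris input in both orientations: the records orientation used by the examples above (a list of row dicts) and the split orientation, which factors out the column names. The exact set of accepted formats should be confirmed in the MLflow documentation.

```python
import json

# The same single-row iris input in two pandas JSON orientations.
records_payload = [
    {"sepal_length": 5.1, "sepal_width": 3.5,
     "petal_length": 1.4, "petal_width": 0.2}
]
split_payload = {
    "columns": ["sepal_length", "sepal_width", "petal_length", "petal_width"],
    "data": [[5.1, 3.5, 1.4, 0.2]],
}

# A split-oriented request would advertise the matching content type, e.g.:
#   -H 'Content-Type: application/json; format=pandas-split'

# Re-zip the split form into a row dict to show the two encode the same data.
row = dict(zip(split_payload["columns"], split_payload["data"][0]))
print(json.dumps(split_payload))
```

The split orientation avoids repeating column names on every row, which matters for large batch requests given the 16 MB payload limit.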

Monitor served models

The serving page displays status indicators for the serving cluster as well as individual model versions. In addition, you can use the following to obtain further information:

  • To inspect the state of the serving cluster, use the Model Events tab, which displays a list of all serving events for this model.
  • To inspect the state of a single model version, use the Logs or Version Events tabs on the Model Versions tab.

Version status

Model events