Convert ML models to ONNX with WinMLTools

WinMLTools enables you to convert machine learning models created with different training frameworks into ONNX. It is an extension of ONNXMLTools and TF2ONNX to convert models to ONNX for use with Windows ML.

WinMLTools currently supports conversion from the following frameworks:

  • Apple Core ML
  • Keras
  • scikit-learn
  • lightgbm
  • xgboost
  • libSVM
  • TensorFlow (experimental)

To learn how to export from other ML frameworks, take a look at the ONNX tutorials on GitHub.

In this article, we demonstrate how to use WinMLTools to:

  • Convert Core ML models into ONNX
  • Convert scikit-learn models into ONNX
  • Convert TensorFlow models into ONNX
  • Apply post-training weight quantization to ONNX models
  • Convert floating point models to 16-bit floating point precision models
  • Create custom ONNX operators

Note

The latest version of WinMLTools supports conversion to ONNX versions 1.2.2, 1.3, and 1.4, as specified by ONNX opsets 7, 8, and 9, respectively. Previous versions of the tool do not support ONNX 1.4.

Install WinMLTools

WinMLTools is a Python package (winmltools) that supports Python versions 2.7 and 3.6. If you are working on a data science project, we recommend installing a scientific Python distribution such as Anaconda.

Note

WinMLTools does not currently support Python 3.7.

WinMLTools follows the standard Python package installation process. From your Python environment, run:

pip install winmltools

WinMLTools has the following dependencies:

  • numpy v1.10.0+
  • protobuf v3.6.0+
  • onnx v1.3.0+
  • onnxmltools v1.3.0+
  • tf2onnx v0.3.2+

To update the dependent packages, run the pip command with the -U argument.

pip install -U winmltools

Different converters require you to install different packages.

For libsvm, you can download a libsvm wheel from various web sources. One example can be found at the University of California, Irvine's website.
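
Once you have downloaded a wheel that matches your Python version and platform, install it with pip. The file name below is only a placeholder for whichever wheel you downloaded:

pip install path\to\libsvm-<version>-<platform>.whl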

For coremltools, Apple currently does not distribute Core ML packaging on Windows. You can install it from source:

pip install git+https://github.com/apple/coremltools

Follow onnxmltools on GitHub for further information on onnxmltools dependencies.

Additional details on how to use WinMLTools can be found in the package-specific documentation with the help function.

help(winmltools)

Convert Core ML models

Here, we assume that the path of an example Core ML model file is example.mlmodel.

from coremltools.models.utils import load_spec
# Load model file
model_coreml = load_spec('example.mlmodel')
from winmltools import convert_coreml
# Convert it!
# The automatic code generator (mlgen) uses the name parameter to generate class names.
model_onnx = convert_coreml(model_coreml, 7, name='ExampleModel')

Note

The second parameter in the call to convert_coreml() is the target_opset, and it refers to the version number of the operators in the default namespace ai.onnx. See more details on these operators here. This parameter is only available in the latest version of WinMLTools, enabling developers to target different ONNX versions (currently versions 1.2.2 and 1.3 are supported). To convert models to run with the Windows 10 October 2018 update, use target_opset 7 (ONNX v1.2.2). For Windows 10 builds greater than 17763, WinML accepts models with target_opset 7 and 8 (ONNX v1.3). The Release Notes section also contains the minimum and maximum ONNX versions supported by WinML in different builds.
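
For example, to target ONNX 1.3 (opset 8) for Windows 10 builds greater than 17763, pass 8 as the second argument instead (shown here with a differently named variable so it stays distinct from the opset 7 example above):

model_onnx_opset8 = convert_coreml(model_coreml, 8, name='ExampleModel')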

model_onnx is an ONNX ModelProto object. We can save it in two different formats.

from winmltools.utils import save_model
# Save the produced ONNX model in binary format
save_model(model_onnx, 'example.onnx')
# Save the produced ONNX model in text format
from winmltools.utils import save_text
save_text(model_onnx, 'example.txt')

Note

Core MLTools is a Python package provided by Apple, but it is not available on Windows. If you need to install the package on Windows, install it directly from the repo:

pip install git+https://github.com/apple/coremltools

Convert Core ML models with image inputs or outputs

Because of the lack of image types in ONNX, converting Core ML image models (that is, models using images as inputs or outputs) requires some pre-processing and post-processing steps.

The objective of pre-processing is to make sure the input image is properly formatted as an ONNX tensor. Assume X is an image input with shape [C, H, W] in Core ML. In ONNX, the variable X would be a floating-point tensor with the same shape, and X[0, :, :]/X[1, :, :]/X[2, :, :] stores the image's red/green/blue channel. For grayscale images in Core ML, the format is a [1, H, W] tensor in ONNX because there is only one channel.
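
How you build that tensor depends on your image library. As a minimal sketch, assuming numpy and Pillow (neither is required by WinMLTools) and a placeholder file name, an RGB image can be rearranged into the [C, H, W] layout described above like this:

import numpy as np
from PIL import Image

# Pillow loads images as HWC data in RGB order.
img = np.asarray(Image.open('input.jpg'), dtype=np.float32)   # shape (H, W, 3)

# Rearrange to the [C, H, W] layout described above.
chw = np.transpose(img, (2, 0, 1))                            # shape (3, H, W)

# Converted models typically also expect a leading batch dimension, i.e. [N, C, H, W].
onnx_input = np.expand_dims(chw, axis=0)                      # shape (1, 3, H, W)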

If the original Core ML model outputs an image, manually convert ONNX's floating-point output tensors back into images. There are two basic steps. The first step is to truncate values greater than 255 to 255 and change all negative values to 0. The second step is to round all pixel values to integers (by adding 0.5 and then truncating the decimals). The output channel order (for example, RGB or BGR) is indicated in the Core ML model. To generate proper image output, we may need to transpose or shuffle the channels to recover the desired format.

Here we consider a Core ML model, FNS-Candy, downloaded from GitHub, as a concrete conversion example to demonstrate the difference between the ONNX and Core ML formats. Note that all the subsequent commands are executed in a Python environment.

First, we load the Core ML model and examine its input and output formats.

from coremltools.models.utils import load_spec
model_coreml = load_spec('FNS-Candy.mlmodel')
model_coreml.description # Print the content of Core ML model description

Screen output:

...
input {
    ...
      imageType {
      width: 720
      height: 720
      colorSpace: BGR
    ...
}
...
output {
    ...
      imageType {
      width: 720
      height: 720
      colorSpace: BGR
    ...
}
...

In this case, both the input and output are 720x720 BGR images. Our next step is to convert the Core ML model with WinMLTools.

# The automatic code generator (mlgen) uses the name parameter to generate class names.
from winmltools import convert_coreml
model_onnx = convert_coreml(model_coreml, 7, name='FNSCandy')

An alternative way to view the model's input and output formats in ONNX is to use the following command:

model_onnx.graph.input # Print out the ONNX input tensor's information

Screen output:

...
  tensor_type {
    elem_type: FLOAT
    shape {
      dim {
        dim_param: "None"
      }
      dim {
        dim_value: 3
      }
      dim {
        dim_value: 720
      }
      dim {
        dim_value: 720
      }
    }
  }
...

The produced input (denoted by X) in ONNX is a 4-D tensor. The last 3 axes are the C-, H-, and W-axes, respectively. The first dimension is "None", which means that this ONNX model can be applied to any number of images. To apply this model to process a batch of 2 images, the first image corresponds to X[0, :, :, :] while X[1, :, :, :] corresponds to the second image. The blue/green/red channels of the first image are X[0, 0, :, :]/X[0, 1, :, :]/X[0, 2, :, :], and similarly for the second image.
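
As a quick numpy-only illustration of that indexing (the array below is a placeholder, not real image data):

import numpy as np

# A hypothetical batch of 2 images in NCHW layout, matching the shape printed above.
X = np.zeros((2, 3, 720, 720), dtype=np.float32)

first_image = X[0, :, :, :]             # shape (3, 720, 720)
second_image = X[1, :, :, :]
first_image_channel_0 = X[0, 0, :, :]   # the blue channel for this BGR model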

model_onnx.graph.output # Print out the ONNX output tensor's information

Screen output:

...
  tensor_type {
    elem_type: FLOAT
    shape {
      dim {
        dim_param: "None"
      }
      dim {
        dim_value: 3
      }
      dim {
        dim_value: 720
      }
      dim {
        dim_value: 720
      }
    }
  }
...

As you can see, the produced output format is identical to the original model's input format. However, in this case, it's not yet an image, because the pixel values are floating-point numbers rather than integers. To convert back to an image, truncate values greater than 255 to 255, change negative values to 0, and round all decimals to integers.
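
As a minimal numpy sketch of those steps, assuming output_tensor holds the model's [1, 3, 720, 720] floating-point output:

import numpy as np

# Step 1: clamp values into the valid pixel range [0, 255].
pixels = np.clip(output_tensor[0], 0, 255)

# Step 2: round to integers by adding 0.5 and truncating the decimals.
pixels = np.floor(pixels + 0.5).astype(np.uint8)

# Rearrange [C, H, W] back to [H, W, C]; reverse the channel axis if you need
# RGB instead of this model's BGR order.
image_hwc = np.transpose(pixels, (1, 2, 0))[:, :, ::-1]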

Convert scikit-learn models

The following code snippet trains a linear support vector machine in scikit-learn and converts the model into ONNX.

# First, we create a toy data set.
# The training matrix X contains three examples, with two features each.
# The label vector, y, stores the labels of all examples.
X = [[0.5, 1.], [-1., -1.5], [0., -2.]]
y = [1, -1, -1]

# Then, we create a linear classifier and train it.
from sklearn.svm import LinearSVC
linear_svc = LinearSVC()
linear_svc.fit(X, y)

# To convert scikit-learn models, we need to specify the input feature's name and type for our converter.
# The following line means we have a 2-D float feature vector, and its name is "input" in ONNX.
# The automatic code generator (mlgen) uses the name parameter to generate class names.
from winmltools import convert_sklearn
from winmltools.convert.common.data_types import FloatTensorType
linear_svc_onnx = convert_sklearn(linear_svc, 7, name='LinearSVC',
                                  initial_types=[('input', FloatTensorType([1, 2]))])

# Now, we save the ONNX model into binary format.
from winmltools.utils import save_model
save_model(linear_svc_onnx, 'linear_svc.onnx')

# If you'd like to load an ONNX binary file, our tool can also help.
from winmltools.utils import load_model
linear_svc_onnx = load_model('linear_svc.onnx')

# To see the produced ONNX model, we can print its contents or save it in text format.
print(linear_svc_onnx)
from winmltools.utils import save_text
save_text(linear_svc_onnx, 'linear_svc.txt')

# The conversion of linear support vector regression (LinearSVR) is very similar. See the example below.
from sklearn.svm import LinearSVR
linear_svr = LinearSVR()
linear_svr.fit(X, y)
linear_svr_onnx = convert_sklearn(linear_svr, 7, name='LinearSVR',
                                  initial_types=[('input', FloatTensorType([1, 2]))])

As before, convert_sklearn takes a scikit-learn model as its first argument and the target_opset as its second argument. Users can replace LinearSVC with other scikit-learn models such as RandomForestClassifier. Please note that mlgen uses the name parameter to generate class names and variables. If name is not provided, then a GUID is generated, which will not comply with variable naming conventions for languages like C++/C#.
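
For instance, a sketch following the same pattern with a RandomForestClassifier (the variable names here are illustrative) would look like this:

from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier()
rf.fit(X, y)
rf_onnx = convert_sklearn(rf, 7, name='RandomForestClassifier',
                          initial_types=[('input', FloatTensorType([1, 2]))])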

Convert scikit-learn pipelines

Next, we show how scikit-learn pipelines can be converted into ONNX.

# First, we create a toy data set.
# Notice that the first example's last feature value, 300, is large.
X = [[0.5, 1., 300.], [-1., -1.5, -4.], [0., -2., -1.]]
y = [1, -1, -1]

# Then, we declare a linear classifier.
from sklearn.svm import LinearSVC
linear_svc = LinearSVC()

# One common trick to improve a linear model's performance is to normalize the input data.
from sklearn.preprocessing import Normalizer
normalizer = Normalizer()

# Here, we compose our scikit-learn pipeline.
# First, we apply our normalization.
# Then we feed the normalized data into the linear model.
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(normalizer, linear_svc)
pipeline.fit(X, y)

# Now, we convert the scikit-learn pipeline into ONNX format.
# Compared to the previous example, notice that the specified feature dimension becomes 3.
# The automatic code generator (mlgen) uses the name parameter to generate class names.
from winmltools import convert_sklearn
from winmltools.convert.common.data_types import FloatTensorType, Int64TensorType
pipeline_onnx = convert_sklearn(pipeline, name='NormalizerLinearSVC',
                                input_features=[('input', FloatTensorType([1, 3]))])

# We can print the fresh ONNX model.
print(pipeline_onnx)

# We can also save the ONNX model into a binary file for later use.
from winmltools.utils import save_model
save_model(pipeline_onnx, 'pipeline.onnx')

# Our conversion framework provides limited support of heterogeneous feature values.
# We cannot have numerical types and string types in one feature vector.
# Let's assume that the first two features are floats and the last feature is an integer (encoding a categorical attribute).
X_heter = [[0.5, 1., 300], [-1., -1.5, 400], [0., -2., 100]]

# One popular way to represent categorical features is one-hot encoding.
from sklearn.preprocessing import OneHotEncoder
one_hot_encoder = OneHotEncoder(categorical_features=[2])

# Let's initialize a classifier.
# It will be right after the one-hot encoder in our pipeline.
linear_svc = LinearSVC()

# Then, we form a two-stage pipeline.
another_pipeline = make_pipeline(one_hot_encoder, linear_svc)
another_pipeline.fit(X_heter, y)

# Now, we convert, save, and load the converted model.
# For heterogeneous feature vectors, we need to separately specify their types for
# all homogeneous segments.
# The automatic code generator (mlgen) uses the name parameter to generate class names.
another_pipeline_onnx = convert_sklearn(another_pipeline, name='OneHotLinearSVC',
                                        input_features=[('input', FloatTensorType([1, 2])),
                                        ('another_input', Int64TensorType([1, 1]))])
save_model(another_pipeline_onnx, 'another_pipeline.onnx')
from winmltools.utils import load_model
loaded_onnx_model = load_model('another_pipeline.onnx')

# Of course, we can print the ONNX model to see if anything went wrong.
print(another_pipeline_onnx)

Convert TensorFlow models

The following code is an example of how to convert a frozen TensorFlow model. To get the possible output names of a TensorFlow model, you can use the summarize_graph tool.
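
If you don't have the summarize_graph tool at hand, one simple way to look for candidate output names is to iterate over the nodes of the frozen graph in Python, as in this minimal sketch (it only prints node names; the file name matches the example below):

from tensorflow.core.framework import graph_pb2

graph_def = graph_pb2.GraphDef()
with open('frozen-model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Nodes near the end of the graph, or with names like 'output', are usually the
# graph outputs; append ':0' to a node name to get the corresponding tensor name.
for node in graph_def.node:
    print(node.name, node.op)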

import winmltools
import tensorflow as tf
from tensorflow.core.framework import graph_pb2

filename = 'frozen-model.pb'
output_names = ['output:0']

graph_def = graph_pb2.GraphDef()
with open(filename, 'rb') as file:
  graph_def.ParseFromString(file.read())
g = tf.import_graph_def(graph_def, name='')

with tf.Session(graph=g) as sess:
  converted_model = winmltools.convert_tensorflow(sess.graph, 7, output_names=output_names)
  winmltools.save_model(converted_model, 'model.onnx')  # choose your own output path

The WinMLTools converter uses tf2onnx.tfonnx.process_tf_graph in TF2ONNX.

Convert to floating point 16

WinMLTools supports the conversion of models represented in floating point 32 into a floating point 16 representation (IEEE 754 half), effectively compressing the model by reducing its size by half.

Note

Converting your model to floating point 16 could result in a loss of accuracy. Make sure you verify the model's accuracy before deploying it into your application.

Below is a full example if you want to convert directly from an ONNX binary file.

from winmltools.utils import convert_float_to_float16
from winmltools.utils import load_model, save_model
onnx_model = load_model('model.onnx')
new_onnx_model = convert_float_to_float16(onnx_model)
save_model(new_onnx_model, 'model_fp16.onnx')

With help(winmltools.utils.convert_float_to_float16), you can find more details about this tool. The floating point 16 support in WinMLTools currently complies only with the IEEE 754 floating point standard (2008).

Post-training weight quantization

WinMLTools also supports the compression of models represented in floating point 32 into 8-bit integer representations. This could yield a disk footprint reduction of up to 75%, depending on the model. This reduction is done via a technique called post-training weight quantization, where the model is analyzed and the stored tensor weights are reduced from 32-bit floating point data into 8-bit data.

Note

Post-training weight quantization may result in loss of accuracy in the resulting model. Make sure you verify the model's accuracy before deploying it into your application.

The following is a complete example that demonstrates how to convert directly from an ONNX binary file.

import winmltools

model = winmltools.load_model('model.onnx')
packed_model = winmltools.quantize(model, per_channel=True, nbits=8, use_dequantize_linear=True)
winmltools.save_model(packed_model, 'quantized.onnx')

Here is some information about the input parameters to quantize:

  • per_channel: If set to True, this will linearly quantize each channel for each initialized tensor in [n,c,h,w] format. By default, this parameter is set to True.
  • nbits: The number of bits used to represent quantized values. Currently only 8 bits is supported.
  • use_dequantize_linear: If set to True, this will linearly dequantize each channel in initialized tensors for Conv operators in [n,c,h,w] format. By default, this is set to True.

Create custom ONNX operators

When converting from a Keras or a Core ML model, you can write a custom operator function to embed custom operators into the ONNX graph. During the conversion, the converter invokes your function to translate the Keras layer or the Core ML LayerParameter to an ONNX operator, and then it connects the operator node into the whole graph.

  1. Create the custom function for the ONNX sub-graph building.
  2. Call winmltools.convert_keras or winmltools.convert_coreml with the map of the custom layer name to the custom function.
  3. If applicable, implement the custom layer for the inference runtime.

The following example shows how it works in Keras.

# Define the activation layer.
class ParametricSoftplus(keras.layers.Layer):
    def __init__(self, alpha, beta, **kwargs):
    ...
    ...
    ...

# Create the convert function.
def convert_userPSoftplusLayer(scope, operator, container):
      return container.add_node('ParametricSoftplus', operator.input_full_names, operator.output_full_names,
        op_version=1, alpha=operator.original_operator.alpha, beta=operator.original_operator.beta)

winmltools.convert_keras(keras_model, 7,
    custom_conversion_functions={ParametricSoftplus: convert_userPSoftplusLayer })

Note

Use the following resources for help with Windows ML:

  • To ask or answer technical questions about Windows ML, please use the windows-machine-learning tag on Stack Overflow.
  • To report a bug, please file an issue on our GitHub.
  • To request a feature, please head over to Windows Developer Feedback.