OnnxConverter.Convert(PredictionModel) Method

Definition

ONNX is an intermediate representation format for machine learning models. It is used to make models portable so that you can train a model in one toolkit and run it in another toolkit's runtime. For example, you can create a model using ML.NET, export it to an ONNX-ML model file, and then load and run that ONNX-ML model in Windows ML in a UWP Windows 10 app.

public void Convert (Microsoft.ML.PredictionModel model);
member this.Convert : Microsoft.ML.PredictionModel -> unit
Public Sub Convert (model As PredictionModel)
Parameters
model
PredictionModel

Model that needs to be converted to ONNX format.

Remarks

ONNX is an intermediate representation format for machine learning models. It is used to make models portable so that you can train a model in one toolkit and run it in another toolkit's runtime. For example, you can create a model using ML.NET or any ONNX-compatible toolkit, convert it to ONNX, and then convert the ONNX model into, say, a CoreML, TensorFlow, or WinML model to run on the respective runtime.

This API converts an ML.NET model to ONNX format by inspecting the transform pipeline from the end, checking for components that know how to save themselves as ONNX. The first item in the transform pipeline that does not know how to save itself as ONNX is considered the "input" to the ONNX pipeline. (Ideally, this would be the original loader itself, but this may not be possible if the user used unsavable transforms in defining the pipe.) All the columns in the source whose types ONNX knows how to deal with will be tracked. Intermediate transformations of the data that appear as new columns will appear in the output block of the ONNX model, with names derived from the corresponding column names. The ONNX JSON will be serialized to a path defined through the Json option.

This API supports the following arguments (a usage sketch follows the list):

  • Onnx indicates the file to write the ONNX protocol buffer to. This is optional.
  • Json indicates the file to write the JSON representation of the ONNX model. This is optional.
  • Name indicates the name property in the ONNX model. If left unspecified, it will be the extension-less name of the file specified in the Onnx argument.
  • Domain indicates the domain name of the model. ONNX uses reverse domain name space indicators, for example, com.microsoft.cognitiveservices. This is a required field.
  • InputsToDrop is a string array of input column names to omit from the input mapping. A common scenario might be to drop the label column, for instance, since it may not be practically useful for the pipeline. Note that any columns depending on these naturally cannot be saved.
  • OutputsToDrop is similar, except for the output schema. Note that the pipeline handler is currently not intelligent enough to drop intermediate calculations that produce this value: this will merely omit that value from the actual output.
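
As a minimal sketch of these arguments: the file paths, model name, domain, and column names below are placeholder values, and model is assumed to be an already-trained PredictionModel obtained elsewhere.

var converter = new Microsoft.ML.Models.OnnxConverter
{
    Onnx = "model.onnx",                           // ONNX protocol buffer output (placeholder path)
    Json = "model.onnx.json",                      // optional JSON representation (placeholder path)
    Name = "MyModel",                              // model name; defaults to the Onnx file name without its extension
    Domain = "com.contoso.models",                 // required reverse-domain identifier (placeholder)
    InputsToDrop = new[] { "Label" },              // omit the label column from the ONNX input mapping
    OutputsToDrop = new[] { "Label", "Features" }  // omit these columns from the ONNX outputs
};
converter.Convert(model);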

Transforms that can be exported to ONNX:

  1. Concat
  2. KeyToVector
  3. NAReplace
  4. Normalize
  5. Term
  6. Categorical

Learners that can be exported to ONNX:

  1. FastTree
  2. LightGBM
  3. Logistic Regression

See the ONNX unit test at https://github.com/dotnet/machinelearning/blob/master/test/Microsoft.ML.Tests/OnnxTests.cs for an example of how to train a model and then convert that model to ONNX.
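
The following condensed sketch, loosely patterned after that test, trains a model with the legacy LearningPipeline API and then converts it to ONNX. The data class, file paths, column names, and the exact loader and trainer type names are assumptions based on the legacy Microsoft.ML (v0.x) API and may differ across versions; consult OnnxTests.cs for a complete, working version.

using Microsoft.ML;                 // legacy LearningPipeline API (assumed namespaces)
using Microsoft.ML.Data;
using Microsoft.ML.Models;
using Microsoft.ML.Runtime.Api;
using Microsoft.ML.Trainers;
using Microsoft.ML.Transforms;

public class InputData
{
    [Column("0")] public float Label;   // hypothetical schema for a placeholder data file
    [Column("1")] public float F1;
    [Column("2")] public float F2;
}

public class BinaryPrediction
{
    [ColumnName("PredictedLabel")] public bool PredictedLabel;
}

// Build and train a pipeline whose components are all exportable to ONNX.
var pipeline = new LearningPipeline();
pipeline.Add(new TextLoader("data.csv").CreateFrom<InputData>(useHeader: false)); // placeholder data file
pipeline.Add(new ColumnConcatenator("Features", "F1", "F2"));  // Concat transform (exportable)
pipeline.Add(new FastTreeBinaryClassifier());                  // FastTree learner (exportable)
var model = pipeline.Train<InputData, BinaryPrediction>();

// Convert the trained model to ONNX, dropping columns that are not needed downstream.
var converter = new OnnxConverter
{
    Onnx = "pipeline.onnx",
    Json = "pipeline.onnx.json",
    Domain = "com.contoso.models",
    InputsToDrop = new[] { "Label" },
    OutputsToDrop = new[] { "Label", "F1", "F2", "Features" }
};
converter.Convert(model);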

Applies to