CoreML Namespace

The CoreML namespace provides relatively high-level access to machine learning model runtimes.

Classes

MLArrayBatchProvider

An IMLBatchProvider backed by an array.

MLCustomLayer_Extensions

Default implementations for optional methods in the IMLCustomLayer protocol.

MLCustomModel

Default implementation of IMLCustomModel.

MLCustomModel_Extensions

Default implementations for optional methods in the IMLCustomModel protocol.

MLDictionaryConstraint

Contains a value that constrains the type of dictionary keys.

MLDictionaryFeatureProvider

An implementation of IMLFeatureProvider that is backed by an NSDictionary.

MLFeatureDescription

A developer-meaningful description of an MLModel feature.

MLFeatureValue

An immutable value and MLFeatureType for a feature.

MLImageConstraint

Contains constraints for an image feature.

MLImageSize

Describes one acceptable image size for the CoreML model inputs.

MLImageSizeConstraint

Description of the constraint on image sizes for a CoreML model.

MLModel

Encapsulates a trained machine-learning model.

MLModelConfiguration

Configuration options, such as the preferred compute units, used when loading an MLModel.

MLModelDescription

A developer-meaningful description of the MLModel.

MLModelErrorExtensions

Extension methods for the CoreML.MLModelError enumeration.

MLModelMetadata

A DictionaryContainer that holds metadata related to an MLModel.

MLMultiArray

Represents an efficient multi-dimensional array.

MLMultiArrayConstraint

Contains constraints for a multidimensional array feature.

MLMultiArrayShapeConstraint

Describes the constraints on the shape of the multidimensional array allowed by the model.

MLPredictionOptions

Contains a value that indicates whether to restrict prediction computations to the CPU.

MLSequence

Encodes a sequence as a single input.

MLSequenceConstraint

A constraint on sequences of features.

Interfaces

IMLBatchProvider

Interface defining the protocol for providing data in batches to the model.

IMLCustomLayer

Interface defining methods necessary for a custom model layer.

IMLCustomModel

Interface defining a custom CoreML model.

IMLFeatureProvider

An interface that defines input or output features and allows access to their values.

Enums

MLComputeUnits

Enumerates the processing units (CPU, GPU, or all available) on which a model may run.

MLFeatureType

Enumerates the kinds of features supported by CoreML.

MLImageSizeConstraintType

Enumerates the form of an MLImageSizeConstraint.

MLModelError

Enumerates errors that may occur in the use of Core ML.

MLMultiArrayDataType

Enumerates the types of values stored in an MLMultiArray.

MLMultiArrayShapeConstraintType

Enumerates the form of an MLMultiArrayShapeConstraint.

Remarks

The CoreML namespace, introduced in iOS 11, allows on-device prediction with a broad variety of machine-learning models produced by frameworks and services such as scikit-learn, TensorFlow, and Azure Custom Vision. CoreML does not support on-device modification of models or their weights; however, it does support loading a model from a URL, so developers can download and use updated models at runtime.

CoreML relies on a "model" that is distributed as a single .mlmodel file. The model is compiled into a usable form either with the tools integrated into Xcode or Xamarin Studio, or at the command line. At the command line, an .mlmodel file can be compiled with xcrun coremlcompiler compile model.mlmodel outputfolder. A compiled model takes the form of a directory called modelname.mlmodelc. The model is then loaded at runtime with code similar to:


var bundle = NSBundle.MainBundle;
// Locate the compiled model directory (MarsHabitatPricer.mlmodelc) in the app bundle
var assetPath = bundle.GetUrlForResource("MarsHabitatPricer", "mlmodelc");
NSError mlErr;
model = MLModel.Create(assetPath, out mlErr);
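The command-line compilation step mentioned above might look like the following (a sketch; the file names are illustrative, and xcrun requires the Xcode command-line tools on macOS):

```shell
# Compile the .mlmodel into a .mlmodelc directory that can be bundled with the app
xcrun coremlcompiler compile MarsHabitatPricer.mlmodel OutputFolder
# The compiled model appears as OutputFolder/MarsHabitatPricer.mlmodelc
```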

Models in CoreML are fairly "black box" and do not have an extensive API. Instead, the iOS developer must know the names and types of the inputs and outputs the model expects. For instance, an image-recognition model might expect a 227x227 CVPixelBuffer identified as "image" and might produce two outputs: a string identified as "classLabel" and an NSDictionary with NSString keys and double values in the range [0,1] representing the confidence of each prediction.

Developers must convert their native representations into CoreML-compatible instances of IMLFeatureProvider. The easiest way to do this is with an MLDictionaryFeatureProvider:


CVPixelBuffer pixelBuffer = … // from an image or video stream
var imageValue = MLFeatureValue.Create(pixelBuffer);

var inputs = new NSDictionary<NSString, NSObject>(new NSString("image"), imageValue);

NSError error, error2;
var inputFp = new MLDictionaryFeatureProvider(inputs, out error);
if (error != null)
{
	ErrorOccurred(this, new EventArgsT<string>(error.ToString()));
	return;
}
var outFeatures = model.GetPrediction(inputFp, out error2);
if (error2 != null)
{
	ErrorOccurred(this, new EventArgsT<string>(error2.ToString()));
	return;
}

var predictionsDictionary = outFeatures.GetFeatureValue("classLabelProbs").DictionaryValue;
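The returned dictionary can then be scanned for the most confident prediction. A minimal sketch, assuming the dictionary maps NSString labels to NSNumber confidences as described above:

```csharp
// Find the label with the highest confidence (sketch; predictionsDictionary
// is the NSDictionary retrieved from the "classLabelProbs" feature above)
string bestLabel = null;
double bestConfidence = 0;
foreach (var kvp in predictionsDictionary)
{
	var confidence = ((NSNumber)kvp.Value).DoubleValue;
	if (confidence > bestConfidence)
	{
		bestConfidence = confidence;
		bestLabel = kvp.Key.ToString();
	}
}
```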

A more complex, but more flexible, way is to implement IMLFeatureProvider:


public class MarsHabitatPricerInput : NSObject, IMLFeatureProvider
{
	public double SolarPanels { get; set; }
	public double Greenhouses { get; set; }
	public double Size { get; set; }

	public NSSet<NSString> FeatureNames =>
		new NSSet<NSString>(new NSString("solarPanels"), new NSString("greenhouses"), new NSString("size"));

	public MLFeatureValue GetFeatureValue(string featureName)
	{
		switch (featureName)
		{
			case "solarPanels":
				return MLFeatureValue.Create(SolarPanels);
			case "greenhouses":
				return MLFeatureValue.Create(Greenhouses);
			case "size":
				return MLFeatureValue.Create(Size);
			default:
				return MLFeatureValue.Create(0);
		}
	}
}

Getting a prediction occurs synchronously, with a call to GetPrediction:


NSError prErr;
IMLFeatureProvider outFeatures = model.GetPrediction(pricerInput, out prErr);
double result = outFeatures.GetFeatureValue("price").DoubleValue;

CoreML currently supports:

Type                        Variants                                         Produced by
Neural networks             Convolutional, feed-forward, recurrent           Caffe, Keras, Azure Custom Vision
Tree ensembles              Random forests, boosted trees, decision trees    scikit-learn, XGBoost
SVMs                        Scalar and multiclass                            scikit-learn, LIBSVM
Generalized linear models   Linear and logistic regression                   scikit-learn
Pipeline models             Sequentially chained models                      scikit-learn

Apple has open-sourced, under the 3-clause BSD License, Python tools for creating CoreML models: https://pypi.python.org/pypi/coremltools
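As a sketch of that workflow (assuming coremltools and scikit-learn are installed via pip, and using the coremltools scikit-learn converter; the feature names and training data are illustrative, echoing the MarsHabitatPricer example above):

```python
# Sketch: convert a trained scikit-learn model to an .mlmodel file.
# Assumes `pip install coremltools scikit-learn`; names and data are illustrative.
from sklearn.linear_model import LinearRegression
import coremltools

# Train a toy regression model on (solarPanels, greenhouses, size) -> price
model = LinearRegression()
model.fit([[1.0, 1.0, 750.0], [2.0, 1.0, 800.0], [3.0, 2.0, 1000.0]],
          [1000.0, 1300.0, 2000.0])

# Convert to a CoreML model and save it; the .mlmodel file can then be
# compiled and bundled as described above
coreml_model = coremltools.converters.sklearn.convert(
    model,
    input_features=["solarPanels", "greenhouses", "size"],
    output_feature_names="price")
coreml_model.save("MarsHabitatPricer.mlmodel")
```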