Export your model for use with mobile devices
Custom Vision Service allows classifiers to be exported to run offline. You can embed your exported classifier into an application and run it locally on a device for real-time classification.
Custom Vision Service supports the following exports:
- TensorFlow for Android.
- CoreML for iOS 11.
- ONNX for Windows ML, Android, and iOS.
- Vision AI Developer Kit.
- A Docker container for Windows, Linux, or ARM architecture. The container includes a TensorFlow model and service code for using the Custom Vision API.
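The Docker container bundles service code that serves predictions over HTTP. As a minimal stdlib-only sketch of calling it from Python — the host, port, and `/image` route are assumptions based on the container's bundled service code, so check the README inside your exported ZIP:

```python
import urllib.request

def build_prediction_request(image_bytes: bytes,
                             host: str = "http://127.0.0.1:80") -> urllib.request.Request:
    """Build a POST request carrying the raw image bytes to the container.

    The /image route and port 80 are assumptions; verify them against the
    service code shipped in your exported container.
    """
    return urllib.request.Request(
        url=f"{host}/image",
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
        method="POST",
    )

# Once the container is running, score an image like this:
#   with open("test.jpg", "rb") as f:
#       req = build_prediction_request(f.read())
#   with urllib.request.urlopen(req) as resp:
#       print(resp.read())  # JSON prediction results
```

Building the request separately from sending it keeps the sketch testable without a running container.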
Custom Vision Service only exports compact domains. The models generated by compact domains are optimized for the constraints of real-time classification on mobile devices. Classifiers built with a compact domain may be slightly less accurate than those built with a standard domain using the same amount of training data.
For information on improving your classifiers, see the Improving your classifier document.
Convert to a compact domain
The steps in this section apply only if you have an existing model that is not set to a compact domain.
To convert the domain of an existing model, take the following steps:
1. On the Custom Vision website, select the Home icon to view a list of your projects.
2. Select a project, and then select the Gear icon in the upper right of the page.
3. In the Domains section, select one of the compact domains, and then select Save Changes.
   For Vision AI Dev Kit, the project must be created with the General (Compact) domain, and you must specify the Vision AI Dev Kit option under the Export Capabilities section.
4. From the top of the page, select Train to retrain using the new domain.
Export your model
To export the model after retraining, use the following steps:
1. Go to the Performance tab and select Export.
   If the Export entry is not available, the selected iteration does not use a compact domain. Use the Iterations section of this page to select an iteration that uses a compact domain, and then select Export.
2. Select your desired export format, and then select Export to download the model.
Integrate your exported model into an application by exploring one of the following articles or samples:
- Use your TensorFlow model with Python
- Use your ONNX model with Windows Machine Learning
- See the CoreML model sample for real-time image classification in an iOS application with Swift.
- See the TensorFlow model sample for real-time image classification in an Android application.
- See the CoreML model with Xamarin sample for real-time image classification in a Xamarin iOS app.
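Whichever format you choose, the exported ZIP typically pairs the model file with a labels.txt listing one class label per line, in the model's output order. A stdlib-only sketch of mapping an output vector back to a label (the file names and scores here are illustrative, not produced by a real model):

```python
from pathlib import Path

def load_labels(path: str) -> list[str]:
    """Read an exported labels.txt, one class label per line, skipping blanks."""
    return [line.strip() for line in Path(path).read_text().splitlines() if line.strip()]

def top_prediction(scores: list[float], labels: list[str]) -> tuple[str, float]:
    """Return the best-scoring label and its score from the model's output vector."""
    best = max(range(len(scores)), key=scores.__getitem__)
    return labels[best], scores[best]

# Demo with a stand-in labels file and made-up scores:
Path("labels.txt").write_text("hemlock\njapanese cherry\n")
labels = load_labels("labels.txt")
print(top_prediction([0.2, 0.8], labels))  # → ('japanese cherry', 0.8)
```

The label order in labels.txt must match the model's output indices, which is why the file is read as-is rather than sorted.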