Windows Machine Learning
With Windows ML, you can integrate trained machine learning models into your Windows apps (C# and C++). The Windows ML inference engine evaluates trained models locally on Windows devices, removing concerns about connectivity, bandwidth, and data privacy. Hardware optimizations for CPU and GPU additionally enable high performance and quick evaluation results.
For the latest Windows ML features and fixes, see our release notes.
To build apps with Windows ML, you'll:
- Get a trained ONNX model, or convert models trained in other ML frameworks into ONNX with WinMLTools.
- Add the ONNX model file to your app.
- Integrate the model into your app's code.
- Run on any Windows device!
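The steps above can be sketched in C++/WinRT using the Windows.AI.MachineLearning APIs. This is a minimal outline, not a complete app: the model path `model.onnx` and the input name are placeholders for illustration, and a real app would bind an actual tensor or image feature value before evaluating.

```cpp
#include <winrt/Windows.AI.MachineLearning.h>
#include <winrt/Windows.Foundation.h>

using namespace winrt::Windows::AI::MachineLearning;

int main()
{
    winrt::init_apartment();

    // Step 2: load the ONNX model file that was added to the app.
    // "model.onnx" is a placeholder path for this sketch.
    LearningModel model = LearningModel::LoadFromFilePath(L"model.onnx");

    // Create a session; the Default device kind lets Windows ML
    // choose the best available hardware (CPU or GPU).
    LearningModelSession session{
        model, LearningModelDevice(LearningModelDeviceKind::Default) };

    // Step 3: bind the model's named inputs, then evaluate locally.
    LearningModelBinding binding{ session };
    // binding.Bind(L"input", /* a TensorFloat or ImageFeatureValue */);
    auto results = session.Evaluate(binding, L"run");
}
```

Because evaluation runs on-device, no network round trip is involved; the same code path works on any Windows device that meets the Windows ML requirements.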
To see Windows ML in action, try out the sample apps in the Windows-Machine-Learning repo on GitHub. To learn more about using Windows ML, look through our documentation.
Windows ML is a pre-release product that may be substantially modified before it’s commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
To ask or answer technical questions about Windows ML, please use Stack Overflow.
To report a bug, please file an issue in our GitHub repo.
To request a feature, please head over to Windows Developer Feedback.