GPU accelerated ML training
This documentation covers what is currently supported by the GPU accelerated machine learning (ML) training preview for the Windows Subsystem for Linux (WSL) and native Windows.
This preview supports both professional and beginner scenarios. Below you will find pointers to step-by-step guides for getting your system set up, based on your level of expertise in ML, your GPU vendor, and the software library you intend to use.
The following features are in preview, and are subject to change.
Professionals
If you’re a professional data scientist who uses a native Linux environment day-to-day for inner-loop ML development and experimentation, and you have an NVIDIA GPU, we recommend setting up the NVIDIA CUDA preview in WSL 2.
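Before installing the CUDA toolkit inside WSL 2, it can help to confirm that the NVIDIA driver on the Windows host is already visible from the Linux side. A minimal sketch, assuming `nvidia-smi` is the probe (the helper name `cuda_driver_visible` is hypothetical, not part of any setup guide):

```python
# Sketch: check whether the NVIDIA driver utilities are reachable from
# inside WSL 2, a prerequisite for the CUDA preview.
# The function name is a hypothetical illustration, not an official API.
import shutil


def cuda_driver_visible() -> bool:
    """Return True if nvidia-smi is on PATH inside this environment."""
    return shutil.which("nvidia-smi") is not None


print(cuda_driver_visible())
```

If this prints `False` inside WSL 2, the Windows-side NVIDIA driver with WSL support likely needs to be installed first, before any in-distro CUDA packages.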
Students and beginners
If you’re a student or beginner looking to start building your knowledge in the ML space, we recommend setting up the TensorFlow with DirectML backend package. This package currently accelerates workflows on AMD and Intel GPUs. Support for NVIDIA GPUs is coming soon.
For those more familiar with a native Linux environment who are getting started with ML workflows, we recommend running the TensorFlow with DirectML package inside WSL 2.
For those more familiar with Windows who are getting started with ML workflows, we recommend setting up the TensorFlow with DirectML package on native Windows.
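After installing the package, a quick smoke test can confirm the environment imports cleanly. A minimal sketch, assuming the `tensorflow-directml` package (which follows the TensorFlow 1.15 API) has been installed via pip; the helper name and status strings here are illustrative, not part of the package:

```python
# Sketch: report whether a TensorFlow package is importable in this
# environment, without hard-failing when it is absent.
# The function and its status strings are hypothetical illustrations.
import importlib.util


def tensorflow_status(module_name: str = "tensorflow") -> str:
    """Return a short status string for the named TensorFlow module."""
    if importlib.util.find_spec(module_name) is None:
        return f"{module_name} is not installed"
    return f"{module_name} is importable"


print(tensorflow_status())
```

Once the import succeeds, the DirectML backend's device-visibility checks in the package's own documentation are the next step for verifying GPU acceleration.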