GPU-accelerated ML training
This documentation covers setting up GPU-accelerated machine learning (ML) training scenarios for the Windows Subsystem for Linux (WSL) and native Windows.
This functionality supports scenarios ranging from beginner experimentation to professional workflows. Below you'll find pointers to step-by-step guides on how to set up your system, depending on your level of ML expertise, your GPU vendor, and the software library you intend to use.
NVIDIA CUDA in WSL
If you're a professional data scientist who uses a native Linux environment day-to-day for inner-loop ML development and experimentation, and you have an NVIDIA GPU, then we recommend setting up NVIDIA CUDA in WSL.
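Before installing the CUDA toolkit inside WSL, it's worth confirming that the Windows NVIDIA driver is already exposing the GPU to your distro. The sketch below is a minimal, hedged check: it assumes `nvidia-smi` is on the `PATH` when the Windows host driver supports WSL GPU passthrough, and falls back to a message when it isn't.

```python
# Minimal sketch: check whether the GPU is visible inside a WSL distro.
# nvidia-smi is provided to WSL by the Windows host NVIDIA driver, not
# by a Linux driver installed inside the distro.
import shutil
import subprocess

def check_wsl_gpu():
    # Returns nvidia-smi's report if the tool is available, or a hint
    # to update the Windows driver otherwise.
    if shutil.which("nvidia-smi") is None:
        return "nvidia-smi not found: update the Windows NVIDIA driver first"
    result = subprocess.run(["nvidia-smi"], capture_output=True, text=True)
    return result.stdout

print(check_wsl_gpu())
```

If the check succeeds, proceed with the CUDA-on-WSL setup guide; installing a Linux NVIDIA driver inside the distro is not required and can conflict with the passthrough driver.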
TensorFlow with DirectML
If you're a student, beginner, or professional who uses TensorFlow and is looking for a framework that works across the breadth of DirectX 12-capable GPUs, then we recommend setting up the TensorFlow with DirectML package. This package accelerates workflows on AMD, Intel, and NVIDIA GPUs.
If you're more familiar with a native Linux environment, then we recommend running TensorFlow with DirectML inside WSL.
If you're more familiar with Windows, then we recommend running TensorFlow with DirectML on native Windows.
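After installing the package (typically `pip install tensorflow-directml`), you can confirm that TensorFlow enumerates your devices. This is a hedged sketch, not part of the official setup guide: it assumes DirectML adapters appear in TensorFlow's local device list, and it degrades gracefully on a machine where TensorFlow isn't installed.

```python
# Minimal sketch: enumerate the compute devices TensorFlow can see.
# With the tensorflow-directml package installed, DirectML adapters
# should appear alongside the CPU in this list (an assumption based
# on the package's documented behavior).
def list_tf_devices():
    try:
        from tensorflow.python.client import device_lib
    except ImportError:
        # TensorFlow (or the DirectML fork) isn't installed yet.
        return ["tensorflow not installed"]
    return [d.name for d in device_lib.list_local_devices()]

print(list_tf_devices())
```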
PyTorch with DirectML
If you're a student, beginner, or professional who uses PyTorch and is looking for a framework that works across the breadth of DirectX 12-capable GPUs, then we recommend setting up the PyTorch with DirectML package. This package accelerates workflows on AMD, Intel, and NVIDIA GPUs.
If you're more familiar with a native Linux environment, then we recommend running PyTorch with DirectML inside WSL.
If you're more familiar with Windows, then we recommend running PyTorch with DirectML on native Windows.
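Once the package is installed (typically `pip install torch-directml`), a common pattern is to select the DirectML device when it's available and fall back to the CPU otherwise. The sketch below is hedged: it assumes the published `torch_directml.device()` call; treat the fallback logic as illustrative rather than the package's official setup flow.

```python
# Minimal sketch: pick a torch-directml device if the package is
# installed, otherwise fall back to "cpu". Tensors and models moved
# to this device would then run on any DirectX 12-capable GPU.
def pick_device():
    try:
        import torch_directml
        return torch_directml.device()
    except ImportError:
        # torch-directml (or PyTorch itself) isn't installed.
        return "cpu"

print(pick_device())
```

In training code, the returned device would then be passed to the usual `tensor.to(device)` / `model.to(device)` calls.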