Databricks Runtime for Machine Learning (Databricks Runtime ML) automates the creation of a cluster optimized for machine learning. Databricks Runtime ML clusters include the most popular machine learning libraries, such as TensorFlow, PyTorch, Keras, and XGBoost, and also include libraries required for distributed training such as Horovod. Using Databricks Runtime ML speeds up cluster creation and ensures that the installed library versions are compatible.
Databricks Runtime ML is built on Databricks Runtime. For example, Databricks Runtime 6.5 ML is built on Databricks Runtime 6.5. The libraries included in the base Databricks Runtime are listed in the Databricks runtime release notes.
This tutorial is designed for new users of Databricks Runtime ML. It takes about 10 minutes to work through, and shows a complete end-to-end example of loading tabular data, training a model, distributed hyperparameter tuning, and model inference. It also illustrates how to use the MLflow API and MLflow Model Registry.
Databricks tutorial notebook
Library utilities are not available in Databricks Runtime ML.
Databricks Runtime ML includes a variety of popular ML libraries. These libraries are updated with each release to include new features and fixes.
Azure Databricks has designated a subset of the supported libraries as top-tier libraries. For these libraries, Azure Databricks provides a faster update cadence, updating to the latest package releases with each runtime release (barring dependency conflicts). Azure Databricks also provides advanced support, testing, and embedded optimizations for top-tier libraries.
For a full list of top-tier and other provided libraries, see the following articles for each available runtime:
- Databricks Runtime 7.2 ML (Beta)
- Databricks Runtime 7.1 ML
- Databricks Runtime 7.0 ML
- Databricks Runtime 6.6 ML
- Databricks Runtime 6.5 ML
- Databricks Runtime 6.4 ML
- Databricks Runtime 5.5 LTS ML
How to use Databricks Runtime ML
In addition to the pre-installed libraries, Databricks Runtime ML differs from Databricks Runtime in the cluster configuration and in how you manage Python packages.
Create a cluster using Databricks Runtime ML
When you create a cluster, select a Databricks Runtime ML version from the Databricks Runtime Version drop-down. Both CPU and GPU-enabled ML runtimes are available.
If you select a GPU-enabled ML runtime, you are prompted to select a compatible Driver Type and Worker Type. Incompatible instance types are grayed out in the drop-downs. GPU-enabled instance types are listed under the GPU-Accelerated label.
Libraries in your workspace that automatically install into all clusters can conflict with the libraries included in Databricks Runtime ML. Before you create a cluster with Databricks Runtime ML, clear the Install automatically on all clusters checkbox for conflicting libraries.
In Databricks Runtime ML, the Conda package manager is used to install Python packages. All Python packages are installed inside a single environment: /databricks/python2 on clusters using Python 2 and /databricks/python3 on clusters using Python 3. Switching (or activating) Conda environments is not supported.
For information on managing Python libraries, see Libraries.
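Because every package lives in that single environment, you can confirm from a notebook which versions the runtime provides before pinning your own. A minimal sketch using only the Python standard library (the package names below are examples):

```python
from importlib.metadata import PackageNotFoundError, version


def installed_version(package_name):
    """Return the installed version of a package, or None if it is absent."""
    try:
        return version(package_name)
    except PackageNotFoundError:
        return None


# Check a few of the runtime's pre-installed ML libraries.
for pkg in ("tensorflow", "torch", "xgboost"):
    print(pkg, installed_version(pkg))
```

Comparing these versions against the runtime release notes helps avoid the library conflicts described above.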
Databricks Runtime ML includes tools to automate the model development process and help you efficiently find the best performing model.
- Managed MLflow manages the end-to-end model lifecycle, including tracking experiment runs, deploying and sharing models, and maintaining a centralized model registry.
- Hyperopt, augmented with the SparkTrials class, automates and distributes ML model parameter tuning.
By using this version of Databricks Runtime, you agree to the terms and conditions outlined in the NVIDIA End User License Agreement (EULA) with respect to the CUDA, cuDNN, and Tesla libraries, and the NVIDIA End User License Agreement (with NCCL Supplement) for the NCCL library.