Access the MLflow tracking server from outside Azure Databricks
You may wish to log to the MLflow tracking server from your own applications or from the MLflow CLI.
This article describes the required configuration steps. Start by installing MLflow and configuring your credentials (Step 1). You can then either configure an application (Step 2) or configure the MLflow CLI (Step 3).
For information on how to launch and log to an open-source tracking server, see the open source documentation.
Step 1: Configure your environment
If you don’t have an Azure Databricks workspace, you can try Databricks for free.
To configure your environment to access your Azure Databricks hosted MLflow tracking server:
- Install MLflow using pip install mlflow.
- Configure authentication. Do one of the following:
  - Generate a REST API token and create a credentials file using databricks configure --token.
  - Specify credentials via environment variables:

    # Configure MLflow to communicate with a Databricks-hosted tracking server
    export MLFLOW_TRACKING_URI=databricks
    # Specify the workspace hostname and token
    export DATABRICKS_HOST="..."
    export DATABRICKS_TOKEN="..."
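If you take the environment-variable route, a quick sanity check before launching your application can save debugging time. The helper below is a hypothetical sketch (the function name and structure are not part of MLflow or the Databricks CLI); it only reports which of the required variables are unset or empty:

```python
import os

def missing_databricks_vars(env=None):
    """Return the names of required Databricks tracking variables
    that are unset or empty in the given environment mapping."""
    env = os.environ if env is None else env
    required = ("MLFLOW_TRACKING_URI", "DATABRICKS_HOST", "DATABRICKS_TOKEN")
    return [name for name in required if not env.get(name)]

if __name__ == "__main__":
    missing = missing_databricks_vars()
    if missing:
        print("Set these variables before logging to Databricks:", ", ".join(missing))
    else:
        print("Databricks tracking environment looks configured.")
```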
Step 2: Configure MLflow applications
Configure MLflow applications to log to Azure Databricks by setting the tracking URI to databricks, or to databricks://<profileName> if you specified a profile name via --profile when creating your credentials file. For example, you can achieve this by setting the MLFLOW_TRACKING_URI environment variable to "databricks".
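The choice between the two URI forms can be sketched as a small helper. This function is purely illustrative (it is not part of the MLflow API); it just builds the string you would pass to mlflow.set_tracking_uri or export as MLFLOW_TRACKING_URI:

```python
def databricks_tracking_uri(profile=None):
    """Build a Databricks tracking URI.

    With no profile, "databricks" selects the default credentials in
    your credentials file; "databricks://<profileName>" selects the
    named profile created via databricks configure --token --profile.
    """
    return "databricks" if profile is None else f"databricks://{profile}"
```

For example, passing a (hypothetical) profile name "ml-team" yields "databricks://ml-team".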
Step 3: Configure the MLflow CLI
Configure the MLflow CLI to communicate with an Azure Databricks tracking server by setting the MLFLOW_TRACKING_URI environment variable. For example, to create an experiment using the CLI with the tracking URI databricks:

# Replace <your-username> with your Databricks username
export MLFLOW_TRACKING_URI=databricks
mlflow experiments create -n /Users/<your-username>/my-experiment