Databricks Asset Bundles development workflow
This article describes the sequence of work tasks for Databricks Asset Bundle development. See What are Databricks Asset Bundles?
To create, validate, deploy, and run bundles, complete the following steps.
Step 1: Create a bundle
There are three ways to begin creating a bundle:
- Use the default bundle template.
- Use a custom bundle template.
- Create a bundle manually.
Use a default bundle template
To use an Azure Databricks default bundle template to create a starter bundle that you can then customize further, use Databricks CLI version 0.205 or above to run the bundle init command, which allows you to choose from a list of available templates:
databricks bundle init
You can view the source for the default bundle templates in the databricks/cli and databricks/mlops-stacks GitHub public repositories.
Skip ahead to Step 2: Populate the bundle configuration files.
Use a custom bundle template
To use a bundle template other than the Azure Databricks default bundle template, you must know the local path or the URL of the remote bundle template location. Use Databricks CLI version 0.205 or above to run the bundle init command as follows:
databricks bundle init <project-template-local-path-or-url>
For more information about this command, see Databricks Asset Bundle templates. For information about a specific bundle template, see the bundle template provider’s documentation.
Skip ahead to Step 2: Populate the bundle configuration files.
Create a bundle manually
To create a bundle manually instead of by using a bundle template, create a project directory on your local machine, or an empty repository with a third-party Git provider.
In your directory or repository, create one or more bundle configuration files as input. These files are expressed in YAML format. There must be exactly one bundle configuration file named databricks.yml. Any additional bundle configuration files must be referenced in the include mapping of the databricks.yml file.
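For example, a minimal databricks.yml that pulls in additional configuration files through the include mapping might look like the following sketch (the bundle name, file glob, and workspace host are placeholders, not prescribed values):

```yaml
# databricks.yml — minimal illustrative bundle configuration; all names are placeholders
bundle:
  name: my-bundle

# Any additional configuration files must be listed in the include mapping.
include:
  - resources/*.yml

targets:
  dev:
    default: true
    workspace:
      host: https://adb-1234567890123456.7.azuredatabricks.net
```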
To more easily and quickly create YAML files that conform to the Databricks Asset Bundle configuration syntax, you can use a tool such as Visual Studio Code, PyCharm Professional, or IntelliJ IDEA Ultimate, each of which provides support for YAML files and JSON schema files, as follows:
Visual Studio Code
1. Add YAML language server support to Visual Studio Code, for example by installing the YAML extension from the Visual Studio Code Marketplace.
2. Generate the Databricks Asset Bundle configuration JSON schema file by using Databricks CLI version 0.205 or above to run the bundle schema command and redirect the output to a JSON file. For example, generate a file named bundle_config_schema.json within the current directory, as follows:
databricks bundle schema > bundle_config_schema.json
3. Use Visual Studio Code to create or open a bundle configuration file within the current directory. This file must be named databricks.yml. Add the following comment to the beginning of your bundle configuration file:
# yaml-language-server: $schema=bundle_config_schema.json
Note
In the preceding comment, if your Databricks Asset Bundle configuration JSON schema file is in a different path, replace bundle_config_schema.json with the full path to your schema file.
4. Use the YAML language server features that you added earlier. For more information, see your YAML language server’s documentation.
PyCharm Professional
1. Generate the Databricks Asset Bundle configuration JSON schema file by using Databricks CLI version 0.205 or above to run the bundle schema command and redirect the output to a JSON file. For example, generate a file named bundle_config_schema.json within the current directory, as follows:
databricks bundle schema > bundle_config_schema.json
2. Configure PyCharm to recognize the bundle configuration JSON schema file, and then complete the JSON schema mapping, by following the instructions in Configure a custom JSON schema.
3. Use PyCharm to create or open a bundle configuration file. This file must be named databricks.yml. As you type, PyCharm checks for JSON schema syntax and formatting and provides code completion hints.
IntelliJ IDEA Ultimate
1. Generate the Databricks Asset Bundle configuration JSON schema file by using Databricks CLI version 0.205 or above to run the bundle schema command and redirect the output to a JSON file. For example, generate a file named bundle_config_schema.json within the current directory, as follows:
databricks bundle schema > bundle_config_schema.json
2. Configure IntelliJ IDEA to recognize the bundle configuration JSON schema file, and then complete the JSON schema mapping, by following the instructions in Configure a custom JSON schema.
3. Use IntelliJ IDEA to create or open a bundle configuration file. This file must be named databricks.yml. As you type, IntelliJ IDEA checks for JSON schema syntax and formatting and provides code completion hints.
Step 2: Populate the bundle configuration files
Bundle configuration files define your Azure Databricks workflows by specifying settings such as workspace details, artifact names, location names, job details, and pipeline details. For detailed information about bundle configuration files, see Databricks Asset Bundle configurations.
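As an illustrative sketch of such a configuration (the bundle name, job key, task settings, and notebook path are all placeholders, not prescribed values), a bundle that defines a single notebook job might look like this:

```yaml
# databricks.yml — illustrative sketch; all names and paths are placeholders
bundle:
  name: hello-bundle

resources:
  jobs:
    hello_job:                  # resource key for this job
      name: hello-job
      tasks:
        - task_key: hello-task
          notebook_task:
            notebook_path: ./src/hello.ipynb

targets:
  dev:
    default: true
```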
Tip
You can use the bundle generate command to autogenerate bundle configuration for an existing resource, and then use the bundle deployment bind command to link the bundle configuration to the resource in the workspace. See Generate a bundle configuration file and Bind bundle resources.
Step 3: Validate the bundle configuration files
Before you deploy artifacts or run a job or pipeline, you should make sure that your bundle configuration files are syntactically correct. To do this, run the bundle validate command from the same directory as the bundle configuration file. This directory is also known as the bundle root.
databricks bundle validate
If the configuration validation was successful, this command outputs a JSON payload representing your bundle.
Step 4: Deploy the bundle
Before you deploy the bundle, make sure that the remote workspace has workspace files enabled. See What are workspace files?.
To deploy any specified local artifacts to the remote workspace, run the bundle deploy command from the bundle root. If no command options are specified, the Databricks CLI uses the default target as declared within the bundle configuration files:
databricks bundle deploy
Tip
You can run databricks bundle commands outside of the bundle root by setting the BUNDLE_ROOT environment variable. If this environment variable is not set, databricks bundle commands attempt to find the bundle root by searching within the current working directory.
To deploy the artifacts within the context of a specific target, specify the -t (or --target) option along with the target’s name as declared within the bundle configuration files. For example, for a target declared with the name dev:
databricks bundle deploy -t dev
Step 5: Run the bundle
To run a specific job or pipeline, run the bundle run command from the bundle root, specifying the job or pipeline key declared within the bundle configuration files. The resource key is the top-level element of the resource’s YAML block. If you do not specify a job or pipeline key, you are prompted to select a resource to run from a list of available resources. If the -t option is not specified, the default target as declared within the bundle configuration files is used. For example, to run a job with the key hello_job within the context of the default target:
databricks bundle run hello_job
To run the job with the key hello_job within the context of a target declared with the name dev:
databricks bundle run -t dev hello_job
Step 6: Destroy the bundle
If you want to delete jobs, pipelines, and artifacts that were previously deployed, run the bundle destroy command from the bundle root. This command deletes all previously deployed jobs, pipelines, and artifacts that are defined in the bundle configuration files:
databricks bundle destroy
By default, you are prompted to confirm permanent deletion of the previously deployed jobs, pipelines, and artifacts. To skip these prompts and perform automatic permanent deletion, add the --auto-approve option to the bundle destroy command.