DataGrip integration with Azure Databricks

DataGrip is an integrated development environment (IDE) for database developers that provides a query console, schema navigation, explain plans, smart code completion, real-time analysis and quick fixes, refactorings, version control integration, and other features.

This article describes how to use your local development machine to install, configure, and use DataGrip to work with databases in Azure Databricks.

Note

This article was tested with macOS, Databricks JDBC Driver version 2.6.17, and DataGrip version 2021.1.

Requirements

Before you install DataGrip, your local development machine must meet the following requirements:

  • A Linux, macOS, or Windows operating system.
  • The Databricks JDBC Driver downloaded onto your local development machine, with the SparkJDBC42.jar file extracted from the downloaded SimbaSparkJDBC42.zip file.
  • An Azure Databricks cluster or SQL endpoint to connect DataGrip to.

Step 1: Install DataGrip

Download and install DataGrip.

  • Linux: Download the .zip file, extract its contents, and then follow the instructions in the Install-Linux-tar.txt file.
  • macOS: Download and run the .dmg file.
  • Windows: Download and run the .exe file.

For more information, see Install DataGrip on the DataGrip website.

Step 2: Configure the Databricks JDBC Driver for DataGrip

Set up DataGrip with information about the Databricks JDBC Driver that you downloaded earlier.

  1. Start DataGrip.
  2. Click File > Data Sources.
  3. In the Data Sources and Drivers dialog box, click the Drivers tab.
  4. Click the + (Driver) button to add a driver.
  5. For Name, enter Databricks.
  6. On the General tab, in the Driver Files list, click the + (Add) button.
  7. Click Custom JARs.
  8. Browse to and select the SparkJDBC42.jar file that you extracted earlier, and then click Open.
  9. For Class, select com.simba.spark.jdbc.Driver.
  10. Click OK.

Step 3: Connect DataGrip to your Azure Databricks databases

Use DataGrip to connect to the cluster or SQL endpoint that you want to use to access the databases in your Azure Databricks workspace.

  1. In DataGrip, click File > Data Sources.

  2. On the Data Sources tab, click the + (Add) button.

  3. Select the Databricks driver that you added in the preceding step.

  4. On the General tab, for URL, enter the value of the JDBC URL field for your Azure Databricks resource as follows:

    Cluster

    1. Find the JDBC URL field value on the JDBC/ODBC tab within the Advanced Options area for your cluster. The JDBC URL should look similar to this one:

      jdbc:spark://adb-1234567890123456.7.azuredatabricks.net:443/default;transportMode=http;ssl=1;httpPath=sql/protocolv1/o/1234567890123456/1234-567890-reef123;AuthMech=3;UID=token;PWD=<personal-access-token>
      
    2. Replace <personal-access-token> with your personal access token for the Azure Databricks workspace.

      Tip

      If you do not want to store your personal access token on your local development machine, omit UID=token;PWD=<personal-access-token> from the JDBC URL and, in the Save list, choose Never. You will be prompted for your User (token) and Password (your personal access token) each time you try to connect.

    3. For Name, enter Databricks cluster.

    For more information, see Data sources and drivers dialog on the DataGrip website.

    SQL endpoint

    1. Find the JDBC URL field value on the Connection Details tab for your SQL endpoint. The JDBC URL should look similar to this one:

      jdbc:spark://adb-1234567890123456.7.azuredatabricks.net:443/default;transportMode=http;ssl=1;AuthMech=3;httpPath=/sql/1.0/endpoints/a123456bcde7f890;
      
    2. For User, enter token.

    3. For Password, enter your personal access token.

      Tip

      If you do not want to store your personal access token on your local development machine, leave User and Password blank and, in the Save list, select Never. You will be prompted for your User (the word token) and Password (your personal access token) each time you try to connect.

    4. For Name, enter Databricks SQL endpoint.

    For more information, see Data sources and drivers dialog on the DataGrip website.

  5. Click Test Connection.

    Tip

    You should start your resource before testing your connection. Otherwise the test might take several minutes to complete while the resource starts.

  6. If the connection succeeds, on the Schemas tab, check the boxes for the schemas that you want to be able to access, for example default.

  7. Click OK.

Repeat the instructions in this step for each resource that you want DataGrip to access.
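The connection string that DataGrip stores for a cluster can also be assembled programmatically, which is handy for sanity-checking the individual parameters. The sketch below is illustrative only: JdbcUrlBuilder is a hypothetical helper, and the host, HTTP path, and token values are placeholders, but the segments match the cluster JDBC URL format shown in step 4.

```java
public class JdbcUrlBuilder {

    // Assemble a Databricks cluster JDBC URL from its parts, using the same
    // parameters as the example URL in step 4 (transportMode, ssl, httpPath,
    // AuthMech, UID, PWD). All argument values here are placeholders.
    static String clusterUrl(String host, String httpPath, String token) {
        return "jdbc:spark://" + host + ":443/default"
                + ";transportMode=http;ssl=1"
                + ";httpPath=" + httpPath
                + ";AuthMech=3;UID=token;PWD=" + token;
    }

    public static void main(String[] args) {
        String url = clusterUrl(
                "adb-1234567890123456.7.azuredatabricks.net",
                "sql/protocolv1/o/1234567890123456/1234-567890-reef123",
                "<personal-access-token>");
        System.out.println(url);
        // With SparkJDBC42.jar on the classpath, this URL could be passed to
        // java.sql.DriverManager.getConnection(url) to open a connection.
    }
}
```

As in the tip above, omitting the UID and PWD segments and supplying the token at connection time avoids storing it in the URL.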

Step 4: Use DataGrip to browse tables

Use DataGrip to access tables in your Azure Databricks workspace.

  1. In DataGrip, in the Database window, expand your resource node, expand the schema you want to browse, and then expand tables.
  2. Double-click a table. The first set of rows from the table is displayed.

Repeat the instructions in this step to access additional tables.

To access tables in other schemas, in the Database window’s toolbar, click the Data Source Properties icon. In the Data Sources and Drivers dialog box, on the Schemas tab, check the box for each additional schema you want to access, and then click OK.

Step 5: Use DataGrip to run SQL statements

Use DataGrip to load the sample diamonds table from the Azure Databricks datasets into the default database in your workspace and then query the table. For more information, see Create a table. If you do not want to load a sample table, skip ahead to Next steps.

  1. In DataGrip, in the Database window, with the default schema expanded, click File > New > SQL File.

  2. Enter a name for the file, for example create_diamonds.

  3. In the file tab, enter these SQL statements, which delete a table named diamonds if it exists and then create a table named diamonds based on the contents of the CSV file within the specified Databricks File System (DBFS) mount point:

    DROP TABLE IF EXISTS diamonds;
    
    CREATE TABLE diamonds USING CSV OPTIONS (path "/databricks-datasets/Rdatasets/data-001/csv/ggplot2/diamonds.csv", header "true");
    
  4. Select the DROP TABLE statement.

  5. On the file tab’s toolbar, click the Execute icon.

  6. Select DROP TABLE IF EXISTS diamonds; CREATE TABLE diamon... from the drop-down list.

    Tip

    To change what happens when you click the Execute icon, select Customize in the drop-down list.

  7. In the Database window, double-click the diamonds table to see its data. If the diamonds table is not displayed, click the Refresh button in the window’s toolbar.

To delete the diamonds table:

  1. In DataGrip, in the Database window’s toolbar, click the Jump to Query Console button.

  2. Select console (Default).

  3. In the console tab, enter this SQL statement:

    DROP TABLE diamonds;
    
  4. Select the DROP TABLE statement.

  5. On the console tab’s toolbar, click the Execute icon. The diamonds table disappears from the list of tables. If the diamonds table does not disappear, click the Refresh button in the Database window’s toolbar.
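The create-and-drop sequence in this step can also be driven through plain JDBC rather than the DataGrip UI. The sketch below is an assumption-laden outline, not part of the documented workflow: DiamondsSetup is a hypothetical class, and its run method expects a java.sql.Connection already opened with the Databricks JDBC URL from step 3.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class DiamondsSetup {

    // The same statements that the create_diamonds file tab runs, in order.
    static String[] statements() {
        return new String[] {
            "DROP TABLE IF EXISTS diamonds",
            "CREATE TABLE diamonds USING CSV OPTIONS ("
                + "path \"/databricks-datasets/Rdatasets/data-001/csv/ggplot2/diamonds.csv\", "
                + "header \"true\")"
        };
    }

    // Execute the statements over an already-open JDBC connection.
    static void run(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            for (String sql : statements()) {
                st.execute(sql);
            }
        }
    }

    public static void main(String[] args) {
        // Print the statements; a real run would obtain a Connection via
        // java.sql.DriverManager.getConnection with the URL from step 3
        // and SparkJDBC42.jar on the classpath.
        for (String sql : statements()) {
            System.out.println(sql + ";");
        }
    }
}
```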

Next steps
