Azure Databricks Quickstart
This quickstart gets you going with Azure Databricks: you create a cluster and a notebook, create a table from a dataset, query the table, and display the query results.
Requirements
You have launched an Azure Databricks workspace. See Try Azure Databricks.
Step 1: Orient yourself to the Databricks UI
From the sidebar at the left and the Common Tasks list on the home page, you access fundamental Azure Databricks entities: Workspace, clusters, tables, notebooks, jobs, and libraries. The Workspace is the special root folder that stores your Azure Databricks assets, such as notebooks and libraries, and the data that you import.
To get help, click the question icon at the top right-hand corner.
Step 2: Create a cluster
A cluster is a collection of Azure Databricks computation resources. To create a cluster:
In the sidebar, click the Clusters button.
On the Clusters page, click Create Cluster.
On the Create Cluster page, specify the cluster name QS and select 5.4 (Scala 2.11, Spark 2.4.3) in the Databricks Runtime Version drop-down.
Click Create Cluster.
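If you prefer to script cluster creation, the Clusters REST API offers an equivalent path. The following is a minimal sketch, not part of this quickstart: the workspace URL, personal access token, node type, and worker count are placeholder assumptions that you would replace with your own values.

import requests

WORKSPACE_URL = "https://<your-workspace>.azuredatabricks.net"  # placeholder
TOKEN = "<personal-access-token>"  # placeholder

# Create a cluster equivalent to the one configured in the UI above.
resp = requests.post(
    f"{WORKSPACE_URL}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "cluster_name": "QS",
        "spark_version": "5.4.x-scala2.11",  # 5.4 (Scala 2.11, Spark 2.4.3)
        "node_type_id": "Standard_DS3_v2",   # assumed Azure node type
        "num_workers": 2,                    # assumed cluster size
    },
)
resp.raise_for_status()
print(resp.json()["cluster_id"])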
Step 3: Create a notebook
A notebook is a collection of cells that run computations on an Apache Spark cluster. To create a notebook in the Workspace:
In the sidebar, click the Workspace button.
In the Workspace folder, select Create > Notebook.
In the Create Notebook dialog, enter a name and select SQL in the Language drop-down. This selection determines the primary language of the notebook.
Click Create. The notebook opens with an empty cell at the top.
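Notebook creation can also be scripted. As a hedged sketch (the workspace URL, token, and target path below are placeholder assumptions), the Workspace REST API can import a SQL notebook from source text:

import base64
import requests

WORKSPACE_URL = "https://<your-workspace>.azuredatabricks.net"  # placeholder
TOKEN = "<personal-access-token>"  # placeholder

source = "SELECT 1"  # initial contents of the notebook
resp = requests.post(
    f"{WORKSPACE_URL}/api/2.0/workspace/import",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "path": "/Users/you@example.com/Quickstart",  # placeholder path
        "language": "SQL",
        "format": "SOURCE",
        "overwrite": True,
        "content": base64.b64encode(source.encode()).decode(),
    },
)
resp.raise_for_status()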
Step 4: Create a table
Create a table using data from a sample CSV file available in Azure Databricks Datasets, a collection of datasets mounted to Databricks File System (DBFS), a distributed file system installed on Azure Databricks clusters. You have two options for creating the table.
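Before you create the table, you can optionally confirm that the sample file exists by listing its directory from a notebook cell (a minimal sketch using the dbutils utilities available in Databricks notebooks):

%python
display(dbutils.fs.ls("/databricks-datasets/Rdatasets/data-001/csv/ggplot2/"))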
Option 1: Create a Spark table from the CSV data
Use this option if you want to get going quickly and only need standard levels of performance. Copy and paste this code snippet into a notebook cell:
DROP TABLE IF EXISTS diamonds;

CREATE TABLE diamonds
USING CSV
OPTIONS (path "/databricks-datasets/Rdatasets/data-001/csv/ggplot2/diamonds.csv", header "true")
Option 2: Write the CSV data to Delta Lake format and create a Delta table
Delta Lake offers a powerful transactional storage layer that enables fast reads and other benefits. Delta Lake format consists of Parquet files plus a transaction log. Use this option to get the best performance on future operations on the table.
Read the CSV data into a DataFrame and write it out in Delta Lake format. This command uses a Python language magic command, which allows you to interleave commands in languages other than the notebook's primary language (SQL). Copy and paste this code snippet into a notebook cell:
%python
diamonds = spark.read.csv("/databricks-datasets/Rdatasets/data-001/csv/ggplot2/diamonds.csv", header="true", inferSchema="true")
diamonds.write.format("delta").save("/mnt/delta/diamonds")
Create a Delta table at the stored location. Copy and paste this code snippet into a notebook cell:
DROP TABLE IF EXISTS diamonds;

CREATE TABLE diamonds
USING DELTA
LOCATION '/mnt/delta/diamonds/'
Run cells by pressing SHIFT + ENTER. The notebook automatically attaches to the cluster you created in Step 2 and runs the command in the cell.
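As an optional sanity check after the cells finish, you can read the new table back from a notebook cell (a minimal sketch; it works for a table created with either option):

%python
# Preview a few rows and confirm the table is populated.
display(spark.table("diamonds").limit(5))
print(spark.table("diamonds").count())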
Step 5: Query the table
Run a SQL statement to query the table for the average diamond price by color.
To add a cell to the notebook, mouse over the cell bottom and click the + (add cell) icon.
Copy this snippet and paste it in the cell.
SELECT color, avg(price) AS price FROM diamonds GROUP BY color ORDER BY color
Press SHIFT + ENTER. The notebook displays a table of diamond color and average price.
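For reference, the same aggregation can be expressed with the PySpark DataFrame API in a %python cell (a sketch equivalent to the SQL above, not an additional required step):

%python
from pyspark.sql import functions as F

# Average price per color, ordered by color, matching the SQL query.
avg_by_color = (spark.table("diamonds")
                .groupBy("color")
                .agg(F.avg("price").alias("price"))
                .orderBy("color"))
display(avg_by_color)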
Step 6: Display the data
Display a chart of the average diamond price by color.
Click the Bar chart icon.
Click Plot Options.
Drag color into the Keys box.
Drag price into the Values box.
In the Aggregation drop-down, select AVG.
Click Apply to display the bar chart.
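If you prefer to produce the chart in code instead of through Plot Options, a minimal matplotlib sketch follows; it assumes matplotlib is available on the cluster, which is the case on standard Databricks runtimes:

%python
import matplotlib.pyplot as plt
from pyspark.sql import functions as F

# Aggregate in Spark, then bring the small result set to the driver to plot.
pdf = (spark.table("diamonds")
       .groupBy("color")
       .agg(F.avg("price").alias("price"))
       .orderBy("color")
       .toPandas())

fig, ax = plt.subplots()
ax.bar(pdf["color"], pdf["price"])
ax.set_xlabel("color")
ax.set_ylabel("average price")
display(fig)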
What’s next
We’ve now covered the basics of Azure Databricks, including creating a cluster and a notebook, running SQL commands in the notebook, and displaying results.
To dive into various Apache Spark articles, see Apache Spark Getting Started.
To read more about the primary tools you use and tasks you can perform with the Azure Databricks workspace, see:
To see some interesting applications of the Azure Databricks workspace, watch these videos: