Data

This section shows how to work with data in Azure Databricks. You can:

  • Create tables directly from imported data. The table schema is stored in the default Azure Databricks internal metastore, and you can also configure and use external metastores (see the first sketch after this list).
  • Use a wide variety of Apache Spark data sources (see the second sketch after this list).
  • Import data into Databricks File System (DBFS), a distributed file system mounted into an Azure Databricks workspace and available on Azure Databricks clusters. You can access the data with the DBFS CLI, the DBFS API, Databricks file system utilities (dbutils.fs), Spark APIs, and local file APIs (see the third sketch after this list).
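For the first item, here is a minimal PySpark sketch of creating a table from imported data. It assumes it runs in a Databricks notebook (where the spark session is predefined) and uses a hypothetical CSV path and table name; saveAsTable registers the schema in the workspace metastore:

    # Read an uploaded CSV file into a DataFrame.
    # The path /FileStore/tables/people.csv is a hypothetical example.
    df = (spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("/FileStore/tables/people.csv"))

    # Save as a managed table; its schema is stored in the metastore.
    df.write.saveAsTable("people")

For the second item, a sketch of the unified DataFrameReader API, which reads many source formats through the same interface. The paths are placeholder examples:

    # Each format has a dedicated reader method on spark.read.
    csv_df = spark.read.option("header", "true").csv("/data/people.csv")
    json_df = spark.read.json("/data/events.json")
    parquet_df = spark.read.parquet("/data/clicks.parquet")

For the third item, a sketch of accessing DBFS with dbutils.fs and with local file APIs. The dbutils object is predefined in Databricks notebooks, and on a cluster DBFS is also exposed at the local /dbfs mount point; the paths are hypothetical examples:

    # List and copy files within DBFS.
    dbutils.fs.ls("/FileStore/tables")
    dbutils.fs.cp("/FileStore/tables/people.csv", "/tmp/people_copy.csv")

    # Local file APIs can reach the same data via the /dbfs mount.
    with open("/dbfs/tmp/people_copy.csv") as f:
        print(f.readline())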
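Spark DataFrames returned by these readers share the same transformation and write APIs, so code written against one source generally carries over to another.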
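Note that dbutils.fs and the /dbfs mount are only available on Azure Databricks clusters; from your local machine, the DBFS CLI and DBFS API provide the equivalent operations.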

See the following sections: