Optimize Performance with File Management
To improve query speed, Delta Lake on Azure Databricks can optimize the layout of data stored in cloud storage. It supports two layout algorithms: bin-packing and Z-Ordering.
This article describes how to run the optimization commands, explains how the two layout algorithms work, and includes an example notebook that demonstrates the benefits of optimization. It also describes how to clean up stale table snapshots. The FAQ explains why optimization is not automatic and recommends how often to run the optimize commands.
For reference information on Delta Lake on Azure Databricks SQL optimization commands, see SQL.
Delta Lake on Azure Databricks can improve the speed of read queries from a table by coalescing small files into larger ones. You trigger compaction by running the OPTIMIZE command:
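For example, the simplest form of the command compacts the files of an entire table (shown here against the same events table used in the examples below):

OPTIMIZE events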
If you have a large amount of data and only want to optimize a subset of it, you can specify an optional partition predicate using WHERE:
OPTIMIZE events WHERE date >= '2017-01-01'
- Bin-packing optimization is idempotent, meaning that if it is run twice on the same dataset, the second instance has no effect.
- Bin-packing aims to produce evenly balanced data files with respect to their size on disk, but not necessarily the number of tuples per file. However, the two measures are most often correlated.
Readers of Delta tables use snapshot isolation, which means that they are not interrupted when OPTIMIZE removes unnecessary files from the transaction log. OPTIMIZE makes no data-related changes to the table, so a read before and after an OPTIMIZE has the same results. Performing OPTIMIZE on a table that is a streaming source does not affect any current or future streams that treat this table as a source.
OPTIMIZE returns the file statistics (min, max, total, and so on) for the files removed and the files added by the operation. The optimize stats also contain the Z-Ordering statistics, the number of batches, and the number of partitions optimized.
Optimize statistics are available in Databricks Runtime 6.0 and above.
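If you want to look these metrics up later, one option (a sketch, assuming the events table from the examples above) is to query the table history, which records the operation metrics for each OPTIMIZE run:

DESCRIBE HISTORY events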
Data skipping information is collected automatically when you write data into a Delta table. Delta Lake on Azure Databricks takes advantage of this information (minimum and maximum values) at query time to provide faster queries. You do not need to configure data skipping; the feature is activated whenever applicable. However, its effectiveness depends on the layout of your data. For best results, apply Z-Ordering.

For an example of the benefits of Delta Lake on Azure Databricks data skipping and Z-Ordering, see the notebooks in the next section. By default Delta Lake on Azure Databricks collects statistics on the first 32 columns defined in your table schema. You can change this value using the table property dataSkippingNumIndexedCols. Adding more columns to collect statistics on adds overhead as you write files.

Collecting statistics on long strings is an expensive operation. To avoid collecting statistics on long strings, you can configure the table property dataSkippingNumIndexedCols so that those columns fall outside the indexed range, or change the ordering of columns in the schema using ALTER TABLE CHANGE COLUMN to move long string columns after the indexed ones. For the purposes of collecting statistics, each field within a nested column is considered as an individual column.
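As a sketch of both approaches, assume a hypothetical events table where longText is a long string column and lastIndexedCol is the fifth column in the schema (both column names are illustrative); as a table property, dataSkippingNumIndexedCols is set with the delta. prefix:

ALTER TABLE events SET TBLPROPERTIES ('delta.dataSkippingNumIndexedCols' = '5')  -- collect stats on the first 5 columns only

ALTER TABLE events CHANGE COLUMN longText longText STRING AFTER lastIndexedCol  -- hypothetical columns: move the long string out of the indexed range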
You can read more about this in the blog post Processing Petabytes of Data in Seconds with Databricks Delta.
Z-Ordering is a technique to colocate related information in the same set of files. This co-locality is automatically used by Delta Lake on Azure Databricks data-skipping algorithms to dramatically reduce the amount of data that needs to be read. To Z-Order data, you specify the columns to order on in the ZORDER BY clause:
OPTIMIZE events WHERE date >= current_timestamp() - INTERVAL 1 day ZORDER BY (eventType)
If you expect a column to be commonly used in query predicates and if that column has high cardinality (that is, a large number of distinct values), then use ZORDER BY.
You can specify multiple columns for ZORDER BY as a comma-separated list. However, the effectiveness of the locality drops with each additional column.
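For example (eventType comes from the earlier example, while deviceId is a hypothetical second high-cardinality column):

OPTIMIZE events ZORDER BY (eventType, deviceId)  -- deviceId is an assumed column, shown for illustration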
Z-Ordering on columns that do not have statistics collected on them is ineffective and a waste of resources, because data skipping requires column-local stats such as min, max, and count. You can configure statistics collection on certain columns by reordering columns in the schema or by increasing the number of columns to collect statistics on. See the section Data skipping for more details.
Prior to Databricks Runtime 5.0, Z-Ordering was not an idempotent operation; it simply rewrote all data matching the given filter every time the command was run. Starting with Databricks Runtime 5.0, Z-Ordering is idempotent, just like bin-packing.
Z-Ordering aims to produce evenly-balanced data files with respect to the number of tuples, but not necessarily data size on disk. The two measures are most often correlated, but there can be situations when that is not the case, leading to skew in optimize task times.
For example, if you ZORDER BY date and your most recent records are all much wider (for example, longer arrays or string values) than the ones in the past, it is expected that the OPTIMIZE job’s task durations will be skewed, as well as the resulting file sizes. This is, however, only a problem for the OPTIMIZE command itself; it should not have any negative impact on subsequent queries.
For an example of the benefits of optimization, see the following notebooks:
- Optimization Examples
At the beginning of each query, Delta tables auto-update to the latest version of the table. This process can be observed in notebooks when the command status reports: Updating the Delta table's state. However, when running historical analysis on a table, you may not necessarily need up-to-the-last-minute data, especially for tables where streaming data is being ingested frequently. In these cases, queries can be run on stale snapshots of your Delta table, which can lower the latency of getting results from your queries.
You can configure how stale your table can be by setting the Spark session configuration spark.databricks.delta.stalenessLimit to a time string value, for example 1h, 15m, or 1d for 1 hour, 15 minutes, and 1 day respectively. This configuration is session specific, so it doesn’t affect how other users access this table from different notebooks, jobs, or BI tools. In addition, this setting doesn’t prevent your table from updating; it only prevents a query from having to wait for the table to update. The update still occurs in the background and shares resources fairly across the cluster. If the staleness limit is exceeded, the query blocks on the table state update.
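For example, to allow query results from a snapshot up to one hour old, you could run the following in a SQL cell before your query (a sketch; the same configuration can also be set through the Spark session configuration API):

SET spark.databricks.delta.stalenessLimit = 1h  -- allow snapshots up to 1 hour stale for this session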
Why isn’t OPTIMIZE automatic?

The OPTIMIZE operation starts up many Spark jobs in order to optimize the file sizing via compaction (and optionally perform Z-Ordering). Since much of what OPTIMIZE does is compact small files, you must first accumulate many small files before this operation has an effect. Therefore, the OPTIMIZE operation is not run automatically.
OPTIMIZE, especially with ZORDER, is an expensive operation in time and resources. If Databricks ran OPTIMIZE automatically or waited to write out data in batches, it would remove the ability to run low-latency Delta Lake streams (where a Delta table is the source). Many customers have Delta tables that are never optimized because they only stream data from these tables, obviating the query benefits that OPTIMIZE would provide.

Lastly, Delta Lake automatically collects statistics about the files that are written to the table (whether through an OPTIMIZE operation or not). This means that reads from Delta tables leverage this information whether or not the table or a partition has had the OPTIMIZE operation run on it.
How often should I run OPTIMIZE?

When you choose how often to run OPTIMIZE, there is a trade-off between performance and cost. You should run OPTIMIZE more often if you want better end-user query performance (necessarily at a higher cost because of resource usage). You should run it less often if you want to optimize cost.
We recommend you start by running OPTIMIZE on a daily basis, and then modify your job from there.
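For example, a daily job could restrict the command to recently written partitions, along the lines of the earlier example (a sketch reusing the events table and its date partition column from above):

OPTIMIZE events WHERE date >= current_timestamp() - INTERVAL 1 day  -- only compact partitions written in the last day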
What’s the best instance type to run OPTIMIZE (bin-packing and Z-Ordering) on?

Both are CPU-intensive operations that perform large amounts of Parquet decoding and encoding. For these workloads we recommend the F or Fsv2 series.