The transaction log for a Delta table contains protocol versioning information that supports Delta Lake evolution. Delta Lake tracks the minimum reader and writer protocol versions separately.
Delta Lake guarantees backward compatibility. A higher version of Databricks Runtime is always able to read data that was written by a lower version.
Delta Lake will occasionally break forward compatibility. Lower versions of Databricks Runtime may not be able to read or write data that was written by a higher version of Databricks Runtime. If you try to read from or write to a table with a version of Databricks Runtime that is too low, you'll get an error telling you that you need to upgrade.
When creating a table, Delta Lake chooses the minimum required protocol version based on table characteristics such as the schema or table properties. You can also set the default protocol versions by setting the SQL configurations:
spark.databricks.delta.properties.defaults.minWriterVersion = 2 (default)
spark.databricks.delta.properties.defaults.minReaderVersion = 1 (default)
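For example, the defaults above can be changed for the current session with the SQL SET command (a session-level sketch; new tables created afterwards pick up these defaults):

```sql
-- Raise the default protocol versions for Delta tables created in this session
SET spark.databricks.delta.properties.defaults.minReaderVersion = 1;
SET spark.databricks.delta.properties.defaults.minWriterVersion = 2;
```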
To upgrade a table to a newer protocol version, use the DeltaTable.upgradeTableProtocol method:
Python:

    from delta.tables import DeltaTable

    delta = DeltaTable.forPath(spark, "path_to_table")  # or DeltaTable.forName
    delta.upgradeTableProtocol(1, 3)  # upgrades to readerVersion=1, writerVersion=3
Scala:

    import io.delta.tables.DeltaTable

    val delta = DeltaTable.forPath(spark, "path_to_table") // or DeltaTable.forName
    delta.upgradeTableProtocol(1, 3) // upgrades to readerVersion=1, writerVersion=3
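Protocol versions can also be raised from SQL by setting the delta.minReaderVersion and delta.minWriterVersion table properties, as in this sketch (the table name my_table is a placeholder):

```sql
-- Upgrade the table protocol to readerVersion=1, writerVersion=3
ALTER TABLE my_table SET TBLPROPERTIES (
  'delta.minReaderVersion' = '1',
  'delta.minWriterVersion' = '3'
);
```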
Protocol upgrades are irreversible, so we recommend upgrading specific tables only when needed, such as to opt in to new features in Delta Lake.
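Because an upgrade cannot be rolled back, it can be worth checking a table's current protocol versions first. One way is DESCRIBE DETAIL, whose result includes minReaderVersion and minWriterVersion columns (my_table is a placeholder name):

```sql
-- Inspect the table's current reader and writer protocol versions
DESCRIBE DETAIL my_table;
```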