SQL Server 2019 Big Data Clusters release notes
The following release notes apply to SQL Server 2019 Big Data Clusters. This article is broken into sections for each release. Each release has a link to a support article describing the CU changes as well as links to the Linux package downloads. The article also lists known issues for the most recent releases of SQL Server Big Data Clusters (BDC).
Supported platforms
This section describes the platforms that are supported with SQL Server Big Data Clusters (BDC).
|Platform|Supported versions|
|---|---|
|Kubernetes|BDC requires a minimum Kubernetes version of 1.13. See Kubernetes version and version skew support policy for the Kubernetes version support policy.|
|Azure Kubernetes Service (AKS)|BDC requires a minimum AKS version of 1.13. See Supported Kubernetes versions in AKS for the version support policy.|
Host OS for Kubernetes
|Operating system|Supported versions|
|---|---|
|Red Hat Enterprise Linux|7.3, 7.4, 7.5, 7.6|
SQL Server Editions
|Edition|Details|
|---|---|
|Big Data Clusters edition|Determined by the edition of the SQL Server master instance. At deployment time, Developer edition is deployed by default. You can change the edition after deployment. See Configure SQL Server master instance.|
Tools
|Tool|Details|
|---|---|
|azdata|Must be the same minor version as the server (the SQL Server master instance). As of SQL Server 2019 CU2, this version is|
|Azure Data Studio|Get the latest build of Azure Data Studio.|
The following table lists the release history for SQL Server 2019 Big Data Clusters.
How to install updates
To install updates, see How to upgrade SQL Server Big Data Clusters.
CU2 (Feb 2020)
This is the Cumulative Update 2 (CU2) release for SQL Server 2019. The SQL Server Database Engine version for this release is 15.0.4013.40.
|Package version|Image tag|
|---|---|
CU1 (Jan 2020)
This is the Cumulative Update 1 (CU1) release for SQL Server 2019. The SQL Server Database Engine version for this release is 15.0.4003.23.
|Package version|Image tag|
|---|---|
GDR1 (Nov 2019)
SQL Server 2019 General Distribution Release 1 (GDR1) - introduces general availability for Big Data Clusters. The SQL Server Database Engine version for this release is 15.0.2070.34.
|Package version|Image tag|
|---|---|
SQL Server 2019 servicing updates
For current information about SQL Server servicing updates, see https://support.microsoft.com/help/4518398.
Known issues
Deployment with private repository
Issue and customer impact: Upgrading from a private repository has specific requirements.
Workaround: If you use a private repository to pre-pull the images for deploying or upgrading BDC, ensure that both the current build images and the target build images are in the private repository. This enables successful rollback, if necessary. Also, if you changed the credentials of the private repository since the original deployment, update the corresponding secret in Kubernetes before you upgrade. `azdata` does not support updating the credentials through the `AZDATA_USERNAME` and `AZDATA_PASSWORD` environment variables. Update the secret using `kubectl edit secrets`.

Upgrading using different repositories for the current and target builds is not supported.
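For reference, the `.dockerconfigjson` payload that a Kubernetes image-pull secret stores can be sketched in Python. This is a minimal illustration only (the registry URL and credentials below are placeholders, and you still apply the result with `kubectl` or the Kubernetes API); it shows how the encoded document that `kubectl edit secrets` exposes is put together:

```python
import base64
import json

def build_dockerconfigjson(registry, username, password):
    """Build the base64-encoded .dockerconfigjson payload that a
    Kubernetes docker-registry secret stores for image pulls."""
    # The "auth" field is the base64 of "username:password".
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    config = {
        "auths": {
            registry: {
                "username": username,
                "password": password,
                "auth": auth,
            }
        }
    }
    # The whole JSON document is base64-encoded again inside the secret.
    return base64.b64encode(json.dumps(config).encode()).decode()

# Placeholder registry and credentials, for illustration only.
payload = build_dockerconfigjson("myregistry.example.com", "puller", "new-password")
```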
Upgrade may fail due to timeout
Issue and customer impact: An upgrade may fail due to timeout.
The following code shows what the failure might look like:
```output
> azdata.EXE bdc upgrade --name <mssql-cluster>
Upgrading cluster to version 15.0.4003
NOTE: Cluster upgrade can take a significant amount of time depending on configuration, network speed, and the number of nodes in the cluster.
Upgrading Control Plane.
Control plane upgrade failed.
Failed to upgrade controller.
```
This error is more likely to occur when you upgrade BDC in Azure Kubernetes Service (AKS).
Workaround: Increase the timeout for the upgrade.
To increase the timeouts for an upgrade, edit the upgrade config map:

1. Run the following command:

   ```console
   kubectl edit configmap controller-upgrade-configmap
   ```

2. Edit the following fields:

   - `controllerUpgradeTimeoutInMinutes`: Designates the number of minutes to wait for the controller or controller db to finish upgrading. Default is 5. Update to at least 20.
   - `totalUpgradeTimeoutInMinutes`: Designates the combined amount of time for both the controller and controller db to finish upgrading (controller + controllerdb upgrade). Default is 10. Update to at least 40.
   - `componentUpgradeTimeoutInMinutes`: Designates the amount of time that each subsequent phase of the upgrade has to complete. Default is 30. Update to 45.

3. Save and exit.
The following Python script is another way to set the timeouts:

```python
from kubernetes import client, config
import json

def set_upgrade_timeouts(namespace, controller_timeout=20, controller_total_timeout=40, component_timeout=45):
    """Set the timeouts for upgrades.

    The timeout settings are as follows:

    controllerUpgradeTimeoutInMinutes: sets the max amount of time for the
    controller or controllerdb to finish upgrading.

    totalUpgradeTimeoutInMinutes: sets the max amount of time to wait for both
    the controller and controllerdb to complete their upgrade.

    componentUpgradeTimeoutInMinutes: sets the max amount of time allowed for
    subsequent phases of the upgrade to complete.
    """
    config.load_kube_config()

    upgrade_config_map = client.CoreV1Api().read_namespaced_config_map("controller-upgrade-configmap", namespace)

    upgrade_config = json.loads(upgrade_config_map.data["controller-upgrade"])

    upgrade_config["controllerUpgradeTimeoutInMinutes"] = controller_timeout
    upgrade_config["totalUpgradeTimeoutInMinutes"] = controller_total_timeout
    upgrade_config["componentUpgradeTimeoutInMinutes"] = component_timeout

    upgrade_config_map.data["controller-upgrade"] = json.dumps(upgrade_config)

    client.CoreV1Api().patch_namespaced_config_map("controller-upgrade-configmap", namespace, upgrade_config_map)
```
Livy job submission from Azure Data Studio (ADS) or curl fail with 500 error
Issue and customer impact: In an HA configuration, the Spark shared resources (`sparkhead`) are configured with multiple replicas. In this case, you might experience failures with Livy job submission from Azure Data Studio (ADS) or `curl`. To verify, a request to any `sparkhead` pod results in a refused connection. For example, `curl https://sparkhead-1:8998` returns a 500 error.
This happens in the following scenarios:
- Zookeeper pods, or the processes for each Zookeeper instance, restarted a few times.
- Networking connectivity between the `sparkhead` pods and the Zookeeper pods is unreliable.
Workaround: Restart both Livy servers.

```console
kubectl -n <clustername> exec sparkhead-0 -c hadoop-livy-sparkhistory supervisorctl restart livy
kubectl -n <clustername> exec sparkhead-1 -c hadoop-livy-sparkhistory supervisorctl restart livy
```
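If your cluster runs with more than two `sparkhead` replicas, the same restart can be scripted per replica. A minimal sketch (the namespace name and replica count below are assumptions; it only builds the command strings, it does not run them):

```python
def livy_restart_commands(namespace, replicas=2):
    """Build one kubectl exec command per sparkhead replica that restarts
    its Livy server through supervisorctl."""
    return [
        f"kubectl -n {namespace} exec sparkhead-{i} "
        f"-c hadoop-livy-sparkhistory supervisorctl restart livy"
        for i in range(replicas)
    ]

# Print the commands for a hypothetical cluster namespace.
for cmd in livy_restart_commands("mssql-cluster"):
    print(cmd)
```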
Create a memory-optimized table when the master instance is in an availability group
Issue and customer impact: You cannot use the primary endpoint exposed for connecting to availability group databases (the listener) to create memory-optimized tables.
Workaround: To create memory-optimized tables when the SQL Server master instance is in an availability group configuration, expose an endpoint that connects directly to the SQL Server instance, connect to the SQL Server database through that endpoint, and create the memory-optimized tables in the session created with the new connection.
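As an illustration, the session on the new connection might run T-SQL like the following. The database, filegroup, file path, and table names here are hypothetical; this is a sketch of standard memory-optimized table DDL, not a prescribed script:

```sql
-- Run on a connection to the instance endpoint, not the AG listener.
-- Names below are illustrative.
ALTER DATABASE SalesDb ADD FILEGROUP SalesDb_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SalesDb ADD FILE (NAME = 'SalesDb_mod', FILENAME = '/var/opt/mssql/data/SalesDb_mod')
    TO FILEGROUP SalesDb_mod;
GO
CREATE TABLE dbo.SessionCache (
    SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload NVARCHAR(1000)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```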
Insert into external tables in Active Directory authentication mode
Issue and customer impact: When the SQL Server master instance is in Active Directory authentication mode, a query that selects only from external tables, where at least one of the external tables is in a storage pool, and inserts the result into another external table, returns:
```output
Msg 7320, Level 16, State 102, Line 1
Cannot execute the query "Remote Query" against OLE DB provider "SQLNCLI11" for linked server "SQLNCLI11". Only domain logins can be used to query Kerberized storage pool.
```
Workaround: Modify the query in one of the following ways. Either join the storage pool table to a local table, or insert into the local table first, then read from the local table to insert into the data pool.
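For example, the second option (insert into a local table first, then read from it) could look like the following hypothetical rewrite, where `dbo.StoragePoolTable` and `dbo.DataPoolTable` stand in for your external tables:

```sql
-- Stage the storage pool rows in a local temporary table first...
SELECT col1, col2
INTO #staging
FROM dbo.StoragePoolTable;

-- ...then insert into the data pool external table from the local copy.
INSERT INTO dbo.DataPoolTable (col1, col2)
SELECT col1, col2
FROM #staging;
```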
Transparent Data Encryption capabilities cannot be used with databases that are part of the availability group in the SQL Server master instance
Issue and customer impact: In an HA configuration, databases that have encryption enabled can't be used after a failover since the master key used for encryption is different on each replica.
Workaround: There is no workaround for this issue. We recommend not enabling encryption in this configuration until a fix is in place.
For more information about SQL Server Big Data Clusters, see What are SQL Server 2019 Big Data Clusters?