Get started with SQL Server Big Data Clusters
This article provides an overview of how to deploy SQL Server 2019 Big Data Clusters. It is meant to orient you to the concepts and provide a framework for understanding the other deployment articles in this section. Your specific deployment steps vary based on your platform choices for the client and server.
To quickly stand up an environment with Kubernetes and a big data cluster so that you can explore its capabilities, use one of the sample scripts in the scripts section. After deployment, use the client tools in the following section to manage the cluster.
Big data clusters require a specific set of client tools. Before you deploy a big data cluster to Kubernetes, you should install the following tools:
|Tool|Description|
|---|---|
|**azdata**|Deploys and manages big data clusters.|
|**kubectl**|Creates and manages the underlying Kubernetes cluster.|
|**Azure Data Studio**|Graphical interface for using the big data cluster.|
|**SQL Server 2019 extension**|Azure Data Studio extension that enables big data cluster features.|
Other tools are required for different scenarios; each article explains the prerequisite tools for performing its specific task. For a full list of tools and installation links, see Install SQL Server 2019 big data tools.
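Once installed, a quick way to confirm the core tools are available on your path is to check their versions (a minimal sketch; the exact version output varies by release):

```shell
# Verify the big data cluster management CLI is installed.
azdata --version

# Verify the Kubernetes CLI (client version only; no cluster connection needed).
kubectl version --client
```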
Big data clusters are deployed as a series of interrelated containers that are managed in Kubernetes. You can host Kubernetes in a variety of ways. Even if you already have an existing Kubernetes environment, you should review the related requirements for big data clusters.
Azure Kubernetes Service (AKS): AKS allows you to deploy a managed Kubernetes cluster in Azure. You only manage and maintain the agent nodes. With AKS, you don't have to provision your own hardware for the cluster. It is also easy to use a python script or a deployment notebook to create the AKS cluster and deploy the big data cluster in one step. For more information about configuring AKS for a big data cluster deployment, see Configure Azure Kubernetes Service for SQL Server 2019 Big Data Clusters deployments.
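As a sketch, an AKS cluster for a dev-test big data cluster deployment can be created with the Azure CLI. The resource group name, cluster name, region, VM size, and node count below are illustrative assumptions, not required values:

```shell
# Create a resource group to hold the AKS cluster (name and region are examples).
az group create --name bdc-rg --location eastus

# Create the managed Kubernetes cluster; node count and VM size are examples only.
az aks create --resource-group bdc-rg \
    --name bdc-aks \
    --node-count 3 \
    --node-vm-size Standard_E4s_v3 \
    --generate-ssh-keys

# Merge the cluster credentials into the local kubeconfig so kubectl and azdata can connect.
az aks get-credentials --resource-group bdc-rg --name bdc-aks
```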
Multiple machines: You can also deploy Kubernetes to multiple Linux machines, which could be physical servers or virtual machines. The kubeadm tool can be used to create the Kubernetes cluster. You can use a bash script to automate this type of deployment. This method works well if you already have existing infrastructure that you want to use for your big data cluster. For more information about using kubeadm deployments with big data clusters, see Configure Kubernetes on multiple machines for SQL Server 2019 Big Data Clusters deployments.
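A kubeadm deployment across multiple machines roughly follows the steps below. This is a sketch under assumptions (the pod network CIDR is an example, and the join placeholders must come from your own `kubeadm init` output):

```shell
# On the machine that will become the control plane:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16   # CIDR shown is an example

# Copy the admin kubeconfig so kubectl works for the current user.
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On each additional machine, join the cluster using the token printed by kubeadm init.
# The placeholders below are not real values; substitute your own output.
sudo kubeadm join <control-plane-host>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```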
Minikube: Minikube allows you to run Kubernetes locally on a single server. It is a good option if you are trying out big data clusters or need to use it in a testing or development scenario. For more information about using Minikube, see the Minikube documentation. For specific requirements for using Minikube with big data clusters, see Configure minikube for SQL Server 2019 big data cluster deployments.
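For a local test environment, a single-node cluster can be started as follows. The memory, CPU, and disk values are illustrative assumptions for a big data cluster test deployment, not official requirements:

```shell
# Start a single-node Kubernetes cluster; resource values shown are examples.
minikube start --memory=32768 --cpus=8 --disk-size=100g
```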
Deploy a big data cluster
After configuring Kubernetes, you deploy a big data cluster with the `azdata bdc create` command. When deploying, you can take several different approaches:
- If you are deploying to a dev-test environment, you can use one of the default configurations provided by `azdata`.
- To customize your deployment, you can create and use your own deployment configuration files.
- For a completely unattended installation, you can pass all other settings in environment variables. For more information, see unattended deployments.
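The approaches above can be sketched with `azdata`. The credentials, profile name, and custom configuration name below are assumptions for illustration:

```shell
# Set the credentials the deployment will use (values are examples; replace them).
export AZDATA_USERNAME=admin
export AZDATA_PASSWORD='<strong-password>'

# Option 1: deploy with a built-in default configuration profile (dev-test example).
azdata bdc create --config-profile aks-dev-test --accept-eula yes

# Option 2: generate a custom configuration from a default profile, edit the
# generated files as needed, then deploy from the custom configuration.
azdata bdc config init --source aks-dev-test --target custom-bdc
azdata bdc create --config-profile custom-bdc --accept-eula yes
```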
Deployment scripts can help deploy both Kubernetes and big data clusters in a single step. They also often provide default values for big data cluster settings. You can customize any deployment script by creating your own version that configures the big data cluster deployment differently.
The following deployment scripts are currently available:
- Python script -- Deploy a big data cluster on Azure Kubernetes Service (AKS)
- Bash script -- Deploy a big data cluster to a single node kubeadm cluster
You can also deploy a big data cluster by running an Azure Data Studio notebook. For more information, see the article on deploying on AKS with a notebook.
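Whichever deployment path you choose, you can watch the deployment progress and discover the cluster's endpoints afterward. This is a sketch; the namespace name below is an assumption:

```shell
# Watch the big data cluster pods come up (namespace shown is an example).
kubectl get pods -n mssql-cluster

# Log in to the deployed cluster and list its external endpoints.
azdata login -n mssql-cluster
azdata bdc endpoint list -o table
```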