You are viewing documentation for the old version of the Azure Container Service. Azure Kubernetes Service (AKS) is being updated to add new deployment options, enhanced management capabilities, and cost benefit to Kubernetes on Azure. Visit the AKS documentation to start working with these preview features.
Container Service frequently asked questions
Which container orchestrators do you support on Azure Container Service?
There is support for open-source DC/OS, Docker Swarm, and Kubernetes. For more information, see the Overview.
Do you support Docker Swarm mode?
Currently Swarm mode is not supported, but it is on the service roadmap.
Does Azure Container Service support Windows containers?
Currently Linux containers are supported with all orchestrators. Support for Windows containers with Kubernetes is in preview.
Do you recommend a specific orchestrator in Azure Container Service?
Generally, we do not recommend a specific orchestrator. If you have experience with one of the supported orchestrators, you can apply that experience in Azure Container Service. In general, however, DC/OS is production-proven for big data and IoT workloads, Kubernetes is well suited for cloud-native workloads, and Docker Swarm is known for its integration with Docker tools and its easy learning curve.
What is the difference between Azure Container Service and ACS Engine?
Azure Container Service is an SLA-backed Azure service with features in the Azure portal, Azure command-line tools, and Azure APIs. The service enables you to quickly implement and manage clusters running standard container orchestration tools with a relatively small number of configuration choices.
ACS Engine is an open-source project that enables power users to customize the cluster configuration at every level. This ability to alter the configuration of both infrastructure and software means that we offer no SLA for ACS Engine. Support is handled through the open-source project on GitHub rather than through official Microsoft channels.
For additional details, see the support policy for containers.
How do I create SSH keys for my cluster?
You can use standard tools on your operating system to create an SSH RSA public and private key pair for authentication against the Linux virtual machines for your cluster. For steps, see the OS X and Linux or Windows guidance.
If you use Azure CLI commands to deploy a container service cluster, SSH keys can be automatically generated for your cluster.
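For example, a suitable RSA key pair can be generated with `ssh-keygen` (the file name and comment below are arbitrary placeholders, not values the service requires):

```shell
# Generate a 2048-bit RSA key pair with no passphrase (for illustration only;
# use a passphrase in practice). Writes ./acs_key and ./acs_key.pub.
ssh-keygen -t rsa -b 2048 -N "" -C "azureuser@acs" -f ./acs_key

# The public key contents are what you supply at cluster deployment time.
cat ./acs_key.pub
```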
How do I create a service principal for my Kubernetes cluster?
An Azure Active Directory service principal ID and password are also needed to create a Kubernetes cluster in Azure Container Service. For more information, see About the service principal for a Kubernetes cluster.
If you use Azure CLI commands to deploy a Kubernetes cluster, service principal credentials can be automatically generated for your cluster.
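If you prefer to create the service principal yourself, one way is with the Azure CLI (a sketch; the role and scope shown are examples, and the subscription ID placeholder must be replaced with your own):

```shell
# Create a service principal with Contributor rights on the subscription.
# The command output includes the appId (client ID) and password to supply
# when you deploy the Kubernetes cluster.
az ad sp create-for-rbac --role="Contributor" \
    --scopes="/subscriptions/<subscription-id>"
```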
How large a cluster can I create?
You can create a cluster with 1, 3, or 5 master nodes. You can choose up to 100 agent nodes.
For larger clusters and depending on the VM size you choose for your nodes, you might need to increase the cores quota in your subscription. To request a quota increase, open an online customer support request at no charge. If you're using an Azure free account, you can use only a limited number of Azure compute cores.
How do I increase the number of masters after a cluster is created?
The number of masters is fixed when the cluster is created and cannot be changed afterward. For high availability, select multiple masters (3 or 5) when you create the cluster.
How do I increase the number of agents after a cluster is created?
You can scale the number of agents in the cluster by using the Azure portal or command-line tools. See Scale an Azure Container Service cluster.
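For example, with the Azure CLI (the resource group and cluster names below are placeholders):

```shell
# Scale the agent pool of an existing ACS cluster to 10 agent nodes.
az acs scale --resource-group myResourceGroup --name myACSCluster \
    --new-agent-count 10
```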
What are the URLs of my masters and agents?
The URLs of cluster resources in Azure Container Service are based on the DNS name prefix you supply and the name of the Azure region you chose for deployment. For example, the fully qualified domain name (FQDN) of the master node is of this form:
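Following the classic ACS naming convention, the master FQDN typically looks like this (the prefix and region are the values you chose at deployment):

```
[dnsPrefix]mgmt.[region].cloudapp.azure.com
```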
You can find commonly used URLs for your cluster in the Azure portal, the Azure Resource Explorer, or other Azure tools.
How do I tell which orchestrator version is running in my cluster?
- DC/OS: See the Mesosphere documentation
- Docker Swarm: Run `docker version`
- Kubernetes: Run `kubectl version`
How do I upgrade the orchestrator after deployment?
Currently, Azure Container Service doesn't provide tools to upgrade the version of the orchestrator you deployed on your cluster. If Container Service supports a later version, you can deploy a new cluster. Another option is to use orchestrator-specific tools if they are available to upgrade a cluster in-place. For example, see DC/OS Upgrading.
Where do I find the SSH connection string to my cluster?
You can find the connection string in the Azure portal, or by using Azure command-line tools.
1. In the portal, navigate to the resource group for the cluster deployment.
2. Click Overview, and then click the Deployments link under Essentials.
3. In the Deployment history blade, click the deployment whose name begins with microsoft-acs followed by a deployment date. Example: microsoft-acs-201701310000.
4. On the Summary page, under Outputs, several cluster links are provided. SSHMaster0 provides an SSH connection string to the first master in your container service cluster.
As previously noted, you can also use Azure tools to find the FQDN of the master. Make an SSH connection to the master using the FQDN of the master and the user name you specified when creating the cluster. For example:
ssh userName@masterFQDN -A -p 22
For more information, see Connect to an Azure Container Service cluster.
My DNS name resolution isn't working on Windows. What should I do?
There are some known DNS issues on Windows whose fixes are still being rolled out. Make sure you are using the most recent acs-engine release and an up-to-date Windows version (with KB4074588 and KB4089848 installed) so that your environment benefits from these fixes. Otherwise, see the following table for mitigation steps:
| Issue | Mitigation |
|---|---|
| When a workload container is unstable and crashes, the network namespace is cleaned up | Redeploy any affected services |
| Service VIP access is broken | Configure a DaemonSet to always keep one normal (non-privileged) pod running |
| When the node on which a container is running becomes unavailable, DNS queries may fail, resulting in a "negative cache entry" | Run the following inside affected containers |
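One command sometimes used for this mitigation (a sketch, assuming a Windows container with PowerShell available; the `MaxNegativeCacheTtl` value controls how long the DNS client caches failed lookups) is:

```powershell
# Set the negative-cache TTL of the Windows DNS client to 0 so that
# failed lookups are not cached (run inside the affected container).
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters' `
    -Name MaxNegativeCacheTtl -Value 0 -PropertyType DWord -Force
```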
If this still doesn't resolve the problem, then try to disable DNS caching completely:
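One way to do this on Windows (a sketch; depending on container privileges, the service may instead need to be disabled through the registry) is to stop the DNS Client cache service:

```powershell
# Disable and stop the Windows DNS Client cache service so every lookup
# goes directly to the DNS server (run inside the affected container).
Set-Service -Name Dnscache -StartupType Disabled
Stop-Service -Name Dnscache -Force
```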