Enable and review Kubernetes master node logs in Azure Kubernetes Service (AKS)

With Azure Kubernetes Service (AKS), the master components such as the kube-apiserver and kube-controller-manager are provided as a managed service. You create and manage the nodes that run the kubelet and container runtime, and deploy your applications through the managed Kubernetes API server. To help troubleshoot your application and services, you may need to view the logs generated by these master components. This article shows you how to use Azure Log Analytics to enable and query the logs from the Kubernetes master components.

Before you begin

This article requires an existing AKS cluster running in your Azure account. If you do not already have an AKS cluster, create one using the Azure CLI or Azure portal. Log Analytics works with both RBAC and non-RBAC enabled AKS clusters.
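If you need a cluster to follow along with, the following sketch creates a minimal one with the Azure CLI. The resource group and cluster names (myResourceGroup, myAKSCluster) are examples used throughout this article; adjust them and the location to suit your environment.

```shell
# Create a resource group and a small single-node AKS cluster (names and location are examples).
az group create --name myResourceGroup --location eastus
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 1 \
    --generate-ssh-keys
```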

Enable diagnostics logs

To help collect and review data from multiple sources, Log Analytics provides a query language and analytics engine that gives you insights into your environment. A workspace is used to collate and analyze the data, and can integrate with other Azure services such as Application Insights and Security Center. To use a different platform to analyze the logs, you can instead choose to send diagnostic logs to an Azure storage account or event hub. For more information, see What is Azure Log Analytics?.

Log Analytics is enabled and managed in the Azure portal. To enable log collection for the Kubernetes master components in your AKS cluster, open the Azure portal in a web browser and complete the following steps:

  1. Select the resource group for your AKS cluster, such as myResourceGroup. Don't select the resource group that contains your individual AKS cluster resources, such as MC_myResourceGroup_myAKSCluster_eastus.
  2. On the left-hand side, choose Diagnostic settings.
  3. Select your AKS cluster, such as myAKSCluster, then choose to Turn on diagnostics.
  4. Enter a name, such as myAKSLogs, then select the option to Send to Log Analytics.
    • Choose to Configure Log Analytics, then select an existing workspace or Create New Workspace.
    • If you need to create a workspace, provide a name, a resource group, and a location.
  5. In the list of available logs, select the logs you wish to enable, such as kube-apiserver, kube-controller-manager, and kube-scheduler. You can return and change the collected logs after Log Analytics is enabled.
  6. When ready, select Save to enable collection of the selected logs.
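If you prefer to script this step rather than use the portal, a diagnostic setting can also be created with the Azure CLI. This is a sketch, assuming the example resource names used above and a workspace named myAKSLogs that already exists:

```shell
# Look up the resource IDs of the cluster and the workspace (names are examples).
AKS_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster --query id -o tsv)
WORKSPACE_ID=$(az monitor log-analytics workspace show \
    --resource-group myResourceGroup --workspace-name myAKSLogs --query id -o tsv)

# Enable collection of the master component logs into the workspace.
az monitor diagnostic-settings create \
    --name myAKSLogs \
    --resource "$AKS_ID" \
    --workspace "$WORKSPACE_ID" \
    --logs '[{"category":"kube-apiserver","enabled":true},{"category":"kube-controller-manager","enabled":true},{"category":"kube-scheduler","enabled":true}]'
```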

The following example portal screenshot shows the Diagnostic settings window and the option to create a Log Analytics workspace:

Enable Log Analytics workspace for Log Analytics of AKS cluster

Note

OMS workspaces are now referred to as Log Analytics workspaces.

Schedule a test pod on the AKS cluster

To generate some logs, create a new pod in your AKS cluster. The following example YAML manifest can be used to create a basic NGINX instance. Create a file named nginx.yaml in an editor of your choice and paste the following content:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: mypod
    image: nginx:1.15.5
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 250m
        memory: 256Mi
    ports:
    - containerPort: 80

Create the pod with the kubectl create command and specify your YAML file, as shown in the following example:

$ kubectl create -f nginx.yaml

pod/nginx created
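Before moving on, you can confirm that the pod was scheduled and reached the Running state, which ensures the API server has recorded events for it:

```shell
# Check the status of the example pod created above.
kubectl get pod nginx
```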

View collected logs

It may take a few minutes for the diagnostics logs to be enabled and appear in the Log Analytics workspace. In the Azure portal, select the resource group for your Log Analytics workspace, such as myResourceGroup, then choose your Log Analytics resource, such as myAKSLogs.

Select the Log Analytics workspace for your AKS cluster

On the left-hand side, choose Logs. To view the kube-apiserver logs, enter the following query in the text box:

AzureDiagnostics
| where Category == "kube-apiserver"
| project log_s

Many logs are likely returned for the API server. To scope down the query to view the logs about the NGINX pod created in the previous step, add an additional where statement to search for pods/nginx as shown in the following example query:

AzureDiagnostics
| where Category == "kube-apiserver"
| where log_s contains "pods/nginx"
| project log_s

The specific logs for your NGINX pod are displayed, as shown in the following example screenshot:

Log Analytics query results for sample NGINX pod

To view additional logs, you can update the query for the Category name to kube-controller-manager or kube-scheduler, depending on what additional logs you enable. Additional where statements can then be used to refine the events you are looking for.

For more information on how to query and filter your log data, see View or analyze data collected with Log Analytics log search.
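The same queries can also be run from the command line. The following sketch assumes the log-analytics Azure CLI extension is installed and that the workspace is named myAKSLogs in myResourceGroup; note that the query command takes the workspace's customer ID (a GUID), not its full resource ID:

```shell
# Requires the log-analytics CLI extension: az extension add --name log-analytics
WORKSPACE_GUID=$(az monitor log-analytics workspace show \
    --resource-group myResourceGroup --workspace-name myAKSLogs \
    --query customerId -o tsv)

# Run the same Kusto query used in the portal above.
az monitor log-analytics query \
    --workspace "$WORKSPACE_GUID" \
    --analytics-query 'AzureDiagnostics | where Category == "kube-apiserver" | where log_s contains "pods/nginx" | project log_s'
```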

Log event schema

To help analyze the log data, the following table details the schema used for each event:

Field name              Description
resourceId              Azure resource that produced the log
time                    Timestamp of when the log was uploaded
category                Name of the container/component generating the log
operationName           Always Microsoft.ContainerService/managedClusters/diagnosticLogs/Read
properties.log          Full text of the log from the component
properties.stream       stderr or stdout
properties.pod          Name of the pod that the log came from
properties.containerID  ID of the Docker container the log came from

Next steps

In this article, you learned how to enable and review the logs for the Kubernetes master components in your AKS cluster. To monitor and troubleshoot further, you can also view the Kubelet logs and enable SSH node access.