Understand monitoring costs for Container insights

This article provides pricing guidance for Container insights to help you understand the following:

  • How to estimate costs up front before you enable Container insights

  • How to measure costs after Container insights has been enabled for one or more clusters

  • How to control data collection to reduce costs

Azure Monitor Logs collects, indexes, and stores data generated by your Kubernetes cluster.

The Azure Monitor pricing model is primarily based on the amount of data ingested in gigabytes per day into your Log Analytics workspace. The cost of a Log Analytics workspace isn't based only on the volume of data collected; it also depends on the plan you select and how long you choose to store the data generated from your clusters.

Note

All sizes and prices in this article are sample estimates only. Refer to the Azure Monitor pricing page for the most recent pricing based on your Azure Monitor Log Analytics pricing model and Azure region.

The following types of data collected from a Kubernetes cluster with Container insights influence cost and can be customized based on your usage:

  • Stdout, stderr container logs from every monitored container in every Kubernetes namespace in the cluster

  • Container environment variables from every monitored container in the cluster

  • Completed Kubernetes jobs/pods in the cluster that don't require monitoring

  • Active scraping of Prometheus metrics

  • Diagnostic log collection of Kubernetes master node logs in your AKS cluster to analyze log data generated by master components such as the kube-apiserver and kube-controller-manager.

What is collected from Kubernetes clusters

Container insights includes a predefined set of metrics and inventory items that are collected and written as log data in your Log Analytics workspace. By default, all metrics listed below are collected once per minute.

Node metrics collected

The following 24 metrics are collected per node:

  • cpuUsageNanoCores
  • cpuCapacityNanoCores
  • cpuAllocatableNanoCores
  • memoryRssBytes
  • memoryWorkingSetBytes
  • memoryCapacityBytes
  • memoryAllocatableBytes
  • restartTimeEpoch
  • used (disk)
  • free (disk)
  • used_percent (disk)
  • io_time (diskio)
  • writes (diskio)
  • reads (diskio)
  • write_bytes (diskio)
  • write_time (diskio)
  • iops_in_progress (diskio)
  • read_bytes (diskio)
  • read_time (diskio)
  • err_in (net)
  • err_out (net)
  • bytes_recv (net)
  • bytes_sent (net)
  • Kubelet_docker_operations (kubelet)

Container metrics

The following eight metrics are collected per container:

  • cpuUsageNanoCores
  • cpuRequestNanoCores
  • cpuLimitNanoCores
  • memoryRssBytes
  • memoryWorkingSetBytes
  • memoryRequestBytes
  • memoryLimitBytes
  • restartTimeEpoch

Cluster inventory

The following cluster inventory data is collected by default:

  • KubePodInventory – 1 per container per minute
  • KubeNodeInventory – 1 per node per minute
  • KubeServices – 1 per service per minute
  • ContainerInventory – 1 per container per minute

Estimating costs to monitor your AKS cluster

The following estimate is based on an Azure Kubernetes Service (AKS) cluster with the sizing example below, and it applies only to the metrics and inventory data collected. Container logs (stdout, stderr, and environment variables) vary based on the log sizes generated by the workload and are excluded from this estimate.

Suppose you enable monitoring of an AKS cluster configured as follows:

  • Three nodes
  • Two disks per node
  • One network interface per node
  • 20 pods (one container in each pod = 20 containers in total)
  • Two Kubernetes namespaces
  • Five Kubernetes services (includes kube-system pods, services, and namespace)
  • Collection frequency = 60 secs (default)

The following table shows the estimated volume of data generated per hour in each Log Analytics table in the assigned workspace. For more information about each of these tables, see Azure Monitor Logs tables.

Table                 Size estimate (MB/hour)
Perf                  12.9
InsightsMetrics       11.3
KubePodInventory      1.5
KubeNodeInventory     0.75
KubeServices          0.13
ContainerInventory    3.6
KubeHealth            0.1
KubeMonAgentEvents    0.005

Total ≈ 31 MB/hour, or about 23.1 GB/month (31 MB/hour × 24 hours × 31 days ≈ 23,064 MB, assuming a 31-day month)

Using the default pricing for Log Analytics, which is a Pay-As-You-Go model, you can estimate the Azure Monitor cost per month. If you include a capacity reservation instead, the price per month would be higher, depending on the reservation selected.

Controlling ingestion to reduce cost

Consider a scenario where your organization's different business units share Kubernetes infrastructure and a Log Analytics workspace, with each business unit separated by a Kubernetes namespace. You can visualize how much data is being ingested by using the Data Usage workbook, which is available from the View Workbooks dropdown.


This workbook helps you visualize the source of your data without having to build your own library of queries from what we share in our documentation. The workbook includes charts that let you view billable data from perspectives such as:

  • Total billable data ingested in GB by solution
  • Billable data ingested by container logs (application logs)
  • Billable container logs data ingested by Kubernetes namespace
  • Billable container logs data ingested by cluster name
  • Billable container log data ingested by log source entry
  • Billable diagnostic data ingested by diagnostic master node logs


To learn about managing rights and permissions to the workbook, review Access control.

After you determine which source or sources are generating the most data, or more data than your requirements call for, you can reconfigure data collection. Details on configuring collection of stdout, stderr, and environment variables are described in the Configure agent data collection settings article.

The following examples show changes you can apply to your cluster by modifying the ConfigMap file to help control cost.
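
The TOML fragments in these examples belong in the data section of the Container insights agent ConfigMap. The following is a minimal sketch of that ConfigMap, assuming the standard container-azm-ms-agentconfig name, the kube-system namespace, and the log-data-collection-settings data key; the values shown reflect the defaults before any of the changes below are applied.

    # Minimal sketch, assuming the standard container-azm-ms-agentconfig ConfigMap
    # read by the Container insights agent from the kube-system namespace.
    # The [log_collection_settings] fragments in the examples below go under the
    # log-data-collection-settings data key. After editing, apply the file with
    # kubectl apply -f container-azm-ms-agentconfig.yaml.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: container-azm-ms-agentconfig
      namespace: kube-system
    data:
      schema-version: v1
      log-data-collection-settings: |-
        [log_collection_settings]
           [log_collection_settings.stdout]
              enabled = true
              exclude_namespaces = ["kube-system"]
           [log_collection_settings.stderr]
              enabled = true
              exclude_namespaces = ["kube-system"]
           [log_collection_settings.env_var]
              enabled = true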

  1. Disable stdout logs across all namespaces in the cluster by modifying the following in the ConfigMap file:

    [log_collection_settings]       
       [log_collection_settings.stdout]          
          enabled = false
    
  2. Disable collecting stderr logs from your development namespace (for example, dev-test), and continue collecting stderr logs from other namespaces (for example, prod and default) by modifying the following in the ConfigMap file:

    Note

    Collection of logs from the kube-system namespace is disabled by default. That default setting is retained, and the dev-test namespace is added to the list of namespaces excluded from stderr log collection.

    [log_collection_settings.stderr]
       enabled = true
       exclude_namespaces = ["kube-system", "dev-test"]
    
  3. Disable environment variable collection across the cluster by modifying the following in the ConfigMap file. This setting applies to all containers in every Kubernetes namespace.

    [log_collection_settings.env_var]
        enabled = false
    
  4. To clean up completed jobs, specify the cleanup policy in the Job definition itself; this is a setting in the Kubernetes Job spec, not in the Container insights ConfigMap. For example:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: pi-with-ttl
    spec:
      ttlSecondsAfterFinished: 100   # delete the Job 100 seconds after it finishes
      template:                      # example pod template; use your own workload
        spec:
          containers:
          - name: pi
            image: perl
            command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
          restartPolicy: Never


After you apply one or more of these ConfigMap changes, see Applying updated ConfigMap to apply them to your cluster.

Prometheus metrics scraping

If you use Prometheus metric scraping, consider the following to limit the number of metrics that you collect from your cluster (a minimal ConfigMap sketch covering these settings follows the list):

  • Ensure that the scraping frequency is set optimally (the default is 60 seconds). You can increase the frequency to 15 seconds, but you need to ensure that the metrics you're scraping are actually published at that frequency. Otherwise, many duplicate metrics are scraped and sent to your Log Analytics workspace, adding to data ingestion and retention costs while providing little additional value.

  • Container insights supports exclusion and inclusion lists by metric name. For example, if you scrape kubedns metrics in your cluster, hundreds of them might be scraped by default, but you're most likely interested in only a subset. Confirm that you specify a list of metrics to scrape, or exclude all but a few, to save on data ingestion volume. It's easy to enable scraping without using many of those metrics, which only adds charges to your Log Analytics bill.

  • When scraping through pod annotations, ensure you filter by namespace so that you exclude scraping of pod metrics from namespaces that you don't use (for example, dev-test namespace).
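
A minimal sketch of these Prometheus scrape settings is shown below. It assumes the same container-azm-ms-agentconfig ConfigMap described earlier, with the settings placed under its prometheus-data-collection-settings data key; the metric names and namespaces are hypothetical placeholders rather than recommendations.

    prometheus-data-collection-settings: |-
      [prometheus_data_collection_settings.cluster]
        # Keep the default 60-second interval unless your targets publish metrics faster.
        interval = "1m"
        # Collect only the metrics you need (fieldpass), or drop ones you don't (fielddrop).
        fieldpass = ["metric_to_collect_1", "metric_to_collect_2"]
        fielddrop = ["metric_to_drop"]
        # When scraping through pod annotations, limit scraping to the namespaces you use.
        monitor_kubernetes_pods = true
        monitor_kubernetes_pods_namespaces = ["prod", "default"]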

Next steps

To understand what your costs are likely to be based on recent usage patterns from data collected with Container insights, see Manage your usage and estimate costs.