Pre-aggregated Metrics - .NET Core app with Prometheus and Azure Monitor


Overview

A sample .NET Core web app that demonstrates different implementations of pre-aggregated metrics. Prometheus and Azure Monitor are two popular choices, each with differing capabilities. This repository provides examples for three options; you can use just one or all three, depending on your scenario.

  1. The Prometheus-Net .NET library is used to export Prometheus-specific metrics.
  2. Agent configuration is used to scrape Prometheus metrics with Azure Monitor. These metrics then populate the InsightsMetrics table in Container insights logs.
  3. The Application Insights .NET Core SDK is used to populate customMetrics using the GetMetric method.

A couple of points to take special note of:

  • A Prometheus server installed on the cluster is configured to collect metrics from all pods.
  • The RequestMiddleware.cs class in the sample application contains the metrics configuration for both Prometheus and GetMetric.

Getting Started

Prerequisites

  • Azure CLI: Create and manage Azure resources.
  • Kubectl: Kubernetes command-line tool which allows you to run commands against Kubernetes clusters.
  • Helm: Package manager for Kubernetes.
  • Docker
  • GitHub account

Quickstart - Running the App Locally

Verify that the sample application runs locally. To collect metrics, continue to the next section and deploy the app to AKS.

  1. Fork this repo to your GitHub account and clone it locally with git clone
  2. cd dotnetapp-azure-prometheus/Application
  3. Run docker-compose up and go to http://localhost:8080 to interact with the application.

Deploy Application to Azure Kubernetes Service

  1. Create a resource group that will hold all the created resources and a service principal to manage and access those resources

    # Set your variables
    # Resource group to hold the resources for this application
    RESOURCEGROUPNAME="insert-resource-group-name-here"
    LOCATION="insert-location-here"
    # Azure subscription ID. Can be located in the Azure portal.
    SUBSCRIPTIONID="insert-subscription-id-here"
    SERVICEPRINCIPAL="insert-service-principal-here"
    
    # Log in to Azure if not already logged in from the CLI
    az login
    
    # Create resource group
    az group create --name $RESOURCEGROUPNAME --location $LOCATION
    
    # Create a service principal with Contributor role to the resource group
    az ad sp create-for-rbac --name $SERVICEPRINCIPAL --role contributor --scopes /subscriptions/$SUBSCRIPTIONID/resourceGroups/$RESOURCEGROUPNAME --sdk-auth
    

    CAUTION: There is a known bug with Git Bash: it attempts to auto-translate resource IDs into Windows paths. If you encounter this issue, it can be fixed by prepending MSYS_NO_PATHCONV=1 to the command. See this link for further information.

  2. Use the output of the last command as a secret named AZURE_CREDENTIALS in the repository settings (Settings -> Secrets -> New repository secret). Set this as a secret on the repository, not on the environment. For more details on configuring GitHub repository secrets, please see this guide

  3. GitHub Actions will be used to automate the workflow and deploy all the necessary resources to Azure. Open .github/workflows/devops-starter-workflow.yml and change the environment variables accordingly. At a minimum, change RESOURCEGROUPNAME (use the value you created above) and REGISTRYNAME. REGISTRYNAME identifies the container registry and must be globally unique; the deployment will fail if this value is not unique. This resource can guide you with naming conventions.

  4. Commit your changes. The commit will trigger the build and deploy jobs within the workflow and will provision all the resources to run the sample application.
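As a rough sketch of what step 3 involves (only RESOURCEGROUPNAME and REGISTRYNAME are confirmed above; the other value shown here is an illustrative assumption), the workflow's env block looks something like:

```yaml
# Sketch of the workflow env section -- values are placeholders
env:
  RESOURCEGROUPNAME: "insert-resource-group-name-here"  # must match the group created above
  REGISTRYNAME: "myuniqueregistry123"                   # globally unique, or the deployment fails
```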

Install Prometheus


# Define variables
RESOURCE_GROUP="insert-resource-group-here"
CLUSTER_NAME="insert-cluster-name-here"
NAMESPACE="insert-namespace-here"

# Connect to Cluster
az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME

# Set the default namespace to the application namespace
kubectl config set-context --current --namespace=$NAMESPACE

helm repo add stable https://charts.helm.sh/stable

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

helm repo add kube-state-metrics https://kubernetes.github.io/kube-state-metrics

helm repo update

helm install my-prometheus prometheus-community/prometheus --set server.service.type=LoadBalancer --set rbac.create=false

# Verify the installation by looking at your services
kubectl get services

# Connect your service with Prometheus
helm upgrade my-prometheus prometheus-community/prometheus --set server.service.type=LoadBalancer --set rbac.create=false -f Application/manifests/prometheus.values.yaml
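If you prefer not to expose the Prometheus server through a LoadBalancer, one alternative (assuming the chart's default server service port of 80) is to port-forward instead:

```shell
# Forward local port 9090 to the Prometheus server service,
# then browse to http://localhost:9090
kubectl port-forward svc/my-prometheus-server 9090:80
```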

Pod Annotations for Scraping

To configure Prometheus to collect metrics from all pods, the following annotations were added to the app's deployment.yaml:

annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "80"
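For context, these annotations belong under the pod template metadata in the Deployment spec. A minimal sketch (the sample app's actual manifest may contain more fields):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sampleapp
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"   # opt this pod in to scraping
        prometheus.io/port: "80"       # port where /metrics is exposed
```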

Configure Prometheus scraping with Azure Monitor

For Prometheus scraping with Azure Monitor, a Prometheus server is not required. The ConfigMap container-azm-ms-agentconfig.yaml enables scraping of Prometheus metrics from each pod in the cluster and has been configured as follows:

prometheus-data-collection-settings: |-
  # Custom Prometheus metrics data collection settings
  [prometheus_data_collection_settings.cluster]
    interval = "1m"
    # Metrics for Prometheus scraping
    fieldpass = ["prom_counter_request_total", "prom_histogram_request_duration", "prom_summary_memory", "prom_gauge_memory"]
    monitor_kubernetes_pods = true

Run the following command to apply this ConfigMap to the cluster:

kubectl apply -f Application/manifests/container-azm-ms-agentconfig.yaml
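To confirm the ConfigMap was applied (the Azure Monitor agent reads it from the kube-system namespace and its pods restart automatically after a change), a quick check might be:

```shell
# Verify the agent ConfigMap exists in kube-system
kubectl get configmap container-azm-ms-agentconfig -n kube-system
```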

Collect Metrics

  1. Get the IP address of the sampleapp (the prometheus-server address is retrieved in a later step):

    kubectl get services sampleapp
    
  2. Load the sampleapp endpoint and interact with the menu items (Home, About, Contact). Pre-aggregated metrics are configured in the RequestMiddleware.cs. They are available with the following implementations:

    • CustomMetrics: Implementation of metrics using the AppInsights .NET Core SDK and TelemetryClient.GetMetric:

      // Example query that gets the metric for total requests
      
      customMetrics
      | where name == "getmetric_count_requests"
      | extend path = tostring(customDimensions.path)
      | order by timestamp desc
      

      (Screenshot: custom-metrics query results)

    • Prometheus metrics: Implementation of Prometheus metrics using the prometheus-net .NET library and the /metrics endpoint:

      (Screenshot: Prometheus metrics at the /metrics endpoint)

    Prometheus metrics are scraped using the following:

    • InsightsMetrics: Agent configuration for scraping with Azure Monitor:

      // Example query that gets the Prometheus metric for total requests
      
      InsightsMetrics
      | where Name == "prom_counter_request_total"
      | where parse_json(Tags).method == "GET"
      | extend path = parse_json(Tags).path
      

      (Screenshot: InsightsMetrics query results)

    • Prometheus Server:

      • Get the prometheus server IP address:

        kubectl get services my-prometheus-server
        
      • Load the prometheus server endpoint. The cluster is configured to collect metrics from all pods:

        (Screenshot: Prometheus server UI)
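Once the Prometheus UI loads, the counter defined earlier can be charted with a PromQL query such as the following (the metric and label names come from the fieldpass list and the queries above; the 5m window is illustrative):

```
# Per-second request rate over the last 5 minutes, split by path
sum by (path) (rate(prom_counter_request_total[5m]))
```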

Optionally Install Grafana

Optionally, install Grafana to visualize the web application data and collected metrics once it is connected to a data source.

helm repo add grafana https://grafana.github.io/helm-charts

helm repo update

helm install my-grafana grafana/grafana --set rbac.create=false --set service.type=LoadBalancer --set persistence.enabled=true

# Verify
kubectl get services

Setup Configuration on Grafana

  1. Get the IP address of the Grafana Dashboard

  2. Log in with user admin. Get the password:

    kubectl get secret my-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
    
  3. Follow the setup guide to get a starter dashboard for Kubernetes
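The dashboard IP in step 1 can be retrieved the same way as for the other services; with the LoadBalancer service type set during install, it appears under EXTERNAL-IP:

```shell
# Get the external IP of the Grafana service
kubectl get services my-grafana
```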

License

See LICENSE.

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Contributing

See CONTRIBUTING.