Configure local metrics and logs for Azure API Management self-hosted gateway
The self-hosted gateway supports StatsD, which has become a unifying protocol for metrics collection and aggregation. This section walks through the steps for deploying StatsD to Kubernetes, configuring the gateway to emit metrics via StatsD, and using Prometheus to monitor the metrics.
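As background, StatsD is a simple line-oriented UDP protocol: each datagram carries a metric name, value, and type, with optional sampling and tag extensions. The following Python sketch illustrates the wire format (the metric and tag names are hypothetical examples, not the gateway's actual output):

```python
import socket

def format_metric(name, value, metric_type, sample_rate=1.0, tags=None):
    """Build one StatsD line; tags use the DogStatsD extension."""
    line = f"{name}:{value}|{metric_type}"
    if sample_rate < 1.0:
        line += f"|@{sample_rate}"          # e.g. "|@0.5" for 50% sampling
    if tags:
        line += "|#" + ",".join(f"{k}:{v}" for k, v in tags.items())
    return line

# A counter increment for one API request (hypothetical names).
payload = format_metric("requests_total", 1, "c", tags={"gateway": "contoso"})

# StatsD listens on UDP; sends are fire-and-forget, so this is safe
# even when no collector is listening.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload.encode("ascii"), ("127.0.0.1", 8125))
```

Because delivery is fire-and-forget UDP, emitting metrics never blocks or fails the request path, which is why StatsD is a popular sidecar for high-throughput gateways.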
Deploy StatsD and Prometheus to the cluster
Below is a sample YAML configuration for deploying StatsD and Prometheus to the Kubernetes cluster where a self-hosted gateway is deployed. It also creates a Service for each. The self-hosted gateway will publish metrics to the StatsD Service. We will access the Prometheus dashboard via its Service.
The following example pulls public container images from Docker Hub. We recommend that you set up a pull secret to authenticate using a Docker Hub account instead of making an anonymous pull request. To improve reliability when working with public content, import and manage the images in a private Azure container registry. Learn more about working with public images.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sputnik-metrics-config
data:
  statsd.yaml: ""
  prometheus.yaml: |
    global:
      scrape_interval: 3s
      evaluation_interval: 3s
    scrape_configs:
      - job_name: 'prometheus'
        static_configs:
          - targets: ['localhost:9090']
      - job_name: 'test_metrics'
        static_configs:
          - targets: ['localhost:9102']
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sputnik-metrics
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sputnik-metrics
  template:
    metadata:
      labels:
        app: sputnik-metrics
    spec:
      containers:
        - name: sputnik-metrics-statsd
          image: prom/statsd-exporter
          ports:
            - name: tcp
              containerPort: 9102
            - name: udp
              containerPort: 8125
              protocol: UDP
          args:
            - --statsd.mapping-config=/tmp/statsd.yaml
            - --statsd.listen-udp=:8125
            - --web.listen-address=:9102
          volumeMounts:
            - mountPath: /tmp
              name: sputnik-metrics-config-files
        - name: sputnik-metrics-prometheus
          image: prom/prometheus
          ports:
            - name: tcp
              containerPort: 9090
          args:
            - --config.file=/tmp/prometheus.yaml
          volumeMounts:
            - mountPath: /tmp
              name: sputnik-metrics-config-files
      volumes:
        - name: sputnik-metrics-config-files
          configMap:
            name: sputnik-metrics-config
---
apiVersion: v1
kind: Service
metadata:
  name: sputnik-metrics-statsd
spec:
  type: NodePort
  ports:
    - name: udp
      port: 8125
      targetPort: 8125
      protocol: UDP
  selector:
    app: sputnik-metrics
---
apiVersion: v1
kind: Service
metadata:
  name: sputnik-metrics-prometheus
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 9090
      targetPort: 9090
  selector:
    app: sputnik-metrics
```
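Note that `statsd.yaml` is left empty above, so the exporter applies its default mapping rules. If you want to rename metrics or extract Prometheus labels from dotted StatsD names, you can supply a statsd-exporter mapping file instead; for example (the rule below is illustrative, not required by the gateway):

```yaml
mappings:
  # Illustrative rule: turn a dotted StatsD name into a labeled Prometheus metric.
  - match: "gateway.requests.*"
    name: "gateway_requests_total"
    labels:
      status: "$1"
```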
Save the configurations to a file named `metrics.yaml` and deploy everything to the cluster with the following command:
```bash
kubectl apply -f metrics.yaml
```
Once the deployment finishes, run the following command to verify that the Pods are running. Note that your pod name will be different.

```console
kubectl get pods
NAME                              READY     STATUS    RESTARTS   AGE
sputnik-metrics-f6d97548f-4xnb7   2/2       Running   0          1m
```
Run the following command to check that the Services are running. Take note of the `PORT` of the StatsD Service; we need it later. You can visit the Prometheus dashboard using its `EXTERNAL-IP` and `PORT`.

```console
kubectl get services
NAME                         TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)          AGE
sputnik-metrics-prometheus   LoadBalancer   10.0.252.72   126.96.36.199   9090:32663/TCP   18h
sputnik-metrics-statsd       NodePort       10.0.41.179   <none>          8125:32733/UDP   18h
```
Configure the self-hosted gateway to emit metrics
Now that both StatsD and Prometheus have been deployed, we can update the configuration of the self-hosted gateway to start emitting metrics through StatsD. The feature is enabled or disabled using the `telemetry.metrics.local` key in the ConfigMap of the self-hosted gateway Deployment, together with additional StatsD-specific options. The following table breaks down the available options:
| Field | Default | Description |
| ---- | ---- | ---- |
| `telemetry.metrics.local` | `none` | Enables logging through StatsD. Value can be `none`, `statsd`. |
| `telemetry.metrics.local.statsd.endpoint` | n/a | Specifies StatsD endpoint. |
| `telemetry.metrics.local.statsd.sampling` | n/a | Specifies metrics sampling rate. Value can be between 0 and 1, e.g., `0.5`. |
| `telemetry.metrics.local.statsd.tag-format` | n/a | StatsD exporter tagging format. Value can be `none`, `librato`, `dogStatsD`, `influxDB`. |
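To see what the sampling rate means in practice, here is a minimal Python sketch of client-side StatsD sampling: with a rate of `0.5`, roughly half of the data points are actually sent, and the collector scales counts back up using the `@0.5` annotation on the wire. This illustrates the general StatsD sampling behavior, not the gateway's internal implementation:

```python
import random

def should_emit(sample_rate: float) -> bool:
    # A rate of 1 sends every data point; 0.5 sends about half of them.
    return random.random() < sample_rate

random.seed(0)  # deterministic for the demo
sent = sum(should_emit(0.5) for _ in range(10_000))
print(sent)  # close to 5,000 of the 10,000 data points
```

A rate below 1 reduces UDP traffic at the cost of statistical precision; `"1"` (as in the sample configuration below) disables sampling entirely.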
Here is a sample configuration:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: contoso-gateway-environment
data:
  config.service.endpoint: "<self-hosted-gateway-management-endpoint>"
  telemetry.metrics.local: "statsd"
  telemetry.metrics.local.statsd.endpoint: "10.0.41.179:8125"
  telemetry.metrics.local.statsd.sampling: "1"
  telemetry.metrics.local.statsd.tag-format: "dogStatsD"
```
Update the YAML file of the self-hosted gateway Deployment with the above configurations and apply the changes using the following command:

```bash
kubectl apply -f <file-name>.yaml
```
To pick up the latest configuration changes, restart the gateway Deployment using the following command:

```bash
kubectl rollout restart deployment/<deployment-name>
```
View the metrics
Now that everything is deployed and configured, the self-hosted gateway should report metrics via StatsD, and Prometheus will pick them up from StatsD. Go to the Prometheus dashboard using the `EXTERNAL-IP` and `PORT` of the Prometheus Service.
Make some API calls through the self-hosted gateway. If everything is configured correctly, you should be able to view the following metrics:
| Metric | Description |
| ---- | ---- |
| `requests_total` | Number of API requests in the period |
| `request_duration_seconds` | Number of seconds from the moment the gateway received the request until the moment the response was sent in full |
| `request_backend_duration_seconds` | Number of seconds spent on overall backend IO (connecting, sending, and receiving bytes) |
| `request_client_duration_seconds` | Number of seconds spent on overall client IO (connecting, sending, and receiving bytes) |
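Besides the dashboard, you can query these metrics through the Prometheus HTTP API. The sketch below only builds an instant-query URL for average request latency; it assumes the exporter publishes the usual `_sum`/`_count` series for the timing metric, and reuses the sample `EXTERNAL-IP` from the Service output earlier:

```python
from urllib.parse import urlencode

def prom_query_url(base: str, promql: str) -> str:
    """Build an instant-query URL for the Prometheus HTTP API (/api/v1/query)."""
    return f"{base}/api/v1/query?{urlencode({'query': promql})}"

# Average request latency over the last 5 minutes.
promql = ("rate(request_duration_seconds_sum[5m])"
          " / rate(request_duration_seconds_count[5m])")
url = prom_query_url("http://126.96.36.199:9090", promql)
# Fetch with e.g. urllib.request.urlopen(url) once the Service is reachable.
```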
The self-hosted gateway outputs logs to `stderr` by default. You can view the logs using the following command:

```bash
kubectl logs <pod-name>
```
If your self-hosted gateway is deployed in Azure Kubernetes Service, you can enable Azure Monitor for containers to collect `stderr` from your workloads and view the logs in Log Analytics.
The self-hosted gateway also supports a number of protocols, including `localsyslog`, `rfc5424`, and `journal`. The following table summarizes all the supported options.
| Field | Default | Description |
| ---- | ---- | ---- |
| `telemetry.logs.std` | `text` | Enables logging to standard streams. Value can be `none`, `text`, `json`. |
| `telemetry.logs.local` | `auto` | Enables local logging. Value can be `none`, `auto`, `localsyslog`, `rfc5424`, `journal`, `json`. |
| `telemetry.logs.local.localsyslog.endpoint` | n/a | Specifies localsyslog endpoint. |
| `telemetry.logs.local.localsyslog.facility` | n/a | Specifies localsyslog facility code, e.g., `7`. |
| `telemetry.logs.local.rfc5424.endpoint` | n/a | Specifies rfc5424 endpoint. |
| `telemetry.logs.local.rfc5424.facility` | n/a | Specifies facility code per RFC 5424, e.g., `7`. |
| `telemetry.logs.local.journal.endpoint` | n/a | Specifies journal endpoint. |
| `telemetry.logs.local.json.endpoint` | `127.0.0.1:8888` | Specifies UDP endpoint that accepts JSON data: file path, IP:port, or hostname:port. |
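The facility codes in the table feed into the standard syslog priority calculation. As a quick illustration of RFC 5424 framing (the header field values below are hypothetical samples, not actual gateway output):

```python
def syslog_pri(facility: int, severity: int) -> int:
    # RFC 5424 defines the PRI value as: facility * 8 + severity.
    return facility * 8 + severity

def rfc5424_header(facility: int, severity: int,
                   timestamp: str, host: str, app: str) -> str:
    """Start of an RFC 5424 message: "<PRI>VERSION TIMESTAMP HOSTNAME APP"."""
    return f"<{syslog_pri(facility, severity)}>1 {timestamp} {host} {app}"

# Facility 7, severity 6 (informational) -> PRI 62.
header = rfc5424_header(7, 6, "2003-10-11T22:14:15.003Z", "gateway-0", "apim")
print(header)  # <62>1 2003-10-11T22:14:15.003Z gateway-0 apim
```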
Here is a sample configuration of local logging:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: contoso-gateway-environment
data:
  config.service.endpoint: "<self-hosted-gateway-management-endpoint>"
  telemetry.logs.std: "text"
  telemetry.logs.local.localsyslog.endpoint: "/dev/log"
  telemetry.logs.local.localsyslog.facility: "7"
```
- Learn more about the observability capabilities of the Azure API Management gateways
- To learn more about the self-hosted gateway, see Azure API Management self-hosted gateway overview
- Learn about configuring and persisting logs in the cloud