How to configure monitoring for Azure Functions

Azure Functions integrates with Application Insights to better enable you to monitor your function apps. Application Insights, a feature of Azure Monitor, is an extensible Application Performance Management (APM) service that collects data generated by your function app, including information your app writes to logs. Application Insights integration is typically enabled when your function app is created. If your app doesn't have the instrumentation key set, you must first enable Application Insights integration.

You can use Application Insights without any custom configuration, but be aware that the default configuration can result in high volumes of data. If you're using a Visual Studio Azure subscription, you might hit your data cap for Application Insights. To learn more about Application Insights costs, see Manage usage and costs for Application Insights. For more information, see Solutions with high volume of telemetry.

Later in this article, you learn how to configure and customize the data that your functions send to Application Insights. For a function app, logging is configured in the host.json file.
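As an orientation, the logging-related settings described in this article live under the logging section of a version 2.x host.json file. The following skeleton is only a sketch of that layout; the individual settings are covered in the sections that follow.

{
  "version": "2.0",
  "logging": {
    "logLevel": {
      "default": "Information"
    },
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true
      }
    }
  }
}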

Note

You can use specially configured application settings to represent specific settings in a host.json file for a specific environment. This lets you effectively change host.json settings without having to republish the host.json file in your project. To learn more, see Override host.json values.
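For example, an application setting named as follows (the Warning value is chosen only for illustration) overrides the logging.logLevel.default value in host.json. The full mapping is described in Overriding monitoring configuration at runtime, later in this article.

AzureFunctionsJobHost__logging__logLevel__default=Warning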

Configure categories

The Azure Functions logger includes a category for every log. The category indicates which part of the runtime code or your function code wrote the log. Categories differ between version 1.x and later versions. The following chart describes the main categories of logs that the runtime creates.

Category Table Description
Function.<YOUR_FUNCTION_NAME> dependencies Dependency data is automatically collected for some services. For successful runs, these logs are at the Information level. Exceptions are logged at the Error level. To learn more, see Dependencies.
Function.<YOUR_FUNCTION_NAME> customMetrics, customEvents The C# and JavaScript SDKs let you collect custom metrics and log custom events. To learn more, see Custom telemetry data.
Function.<YOUR_FUNCTION_NAME> traces Includes function started and completed logs for specific function runs. For successful runs, these logs are at the Information level. Exceptions are logged at the Error level. The runtime also creates Warning level logs, such as when queue messages are sent to the poison queue.
Function.<YOUR_FUNCTION_NAME>.User traces User-generated logs, which can be any log level. To learn more about writing to logs from your functions, see Writing to logs.
Host.Aggregator customMetrics These runtime-generated logs provide counts and averages of function invocations over a configurable period of time. The default period is 30 seconds or 1,000 results, whichever comes first. Examples are the number of runs, success rate, and duration. All of these logs are written at Information level. If you filter at Warning or above, you won't see any of this data.
Host.Results requests These runtime-generated logs indicate success or failure of a function. All of these logs are written at Information level. If you filter at Warning or above, you won't see any of this data.
Microsoft traces Fully-qualified log category that reflects a .NET runtime component invoked by the host.
Worker traces Logs generated by the language worker process for non-.NET languages. Language worker logs may also be logged in a Microsoft.* category, such as Microsoft.Azure.WebJobs.Script.Workers.Rpc.RpcFunctionInvocationDispatcher. These logs are written at Information level.

Note

For .NET class library functions, these categories assume you are using ILogger and not ILogger<T>. To learn more, see the Functions ILogger documentation.

The Table column indicates to which table in Application Insights the log is written.
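For example, the following host.json fragment (a sketch only; MyHttpTrigger is a hypothetical function name, and how log levels work is covered in the next section) uses categories from the preceding table to quiet runtime and worker logs while capturing detailed user logs for one function:

{
  "logging": {
    "logLevel": {
      "default": "Information",
      "Microsoft": "Warning",
      "Worker": "Warning",
      "Function.MyHttpTrigger.User": "Debug"
    }
  }
}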

Configure log levels

A log level is assigned to every log. The value is an integer that indicates relative importance:

LogLevel Code Description
Trace 0 Logs that contain the most detailed messages. These messages may contain sensitive application data. These messages are disabled by default and should never be enabled in a production environment.
Debug 1 Logs that are used for interactive investigation during development. These logs should primarily contain information useful for debugging and have no long-term value.
Information 2 Logs that track the general flow of the application. These logs should have long-term value.
Warning 3 Logs that highlight an abnormal or unexpected event in the application flow, but don't otherwise cause the application execution to stop.
Error 4 Logs that highlight when the current flow of execution is stopped because of a failure. These errors should indicate a failure in the current activity, not an application-wide failure.
Critical 5 Logs that describe an unrecoverable application or system crash, or a catastrophic failure that requires immediate attention.
None 6 Disables logging for the specified category.

The host.json file configuration determines how much logging a function app sends to Application Insights.

For each category, you indicate the minimum log level to send. The host.json settings vary depending on the Functions runtime version.

The example below defines logging based on the following rules:

  • For logs in the Host.Results or Function categories, log only events at the Error level or higher.
  • For logs in the Host.Aggregator category, log all generated metrics (Trace level).
  • For all other logs, including user logs, log only events at the Information level and higher.
{
  "logging": {
    "fileLoggingMode": "always",
    "logLevel": {
      "default": "Information",
      "Host.Results": "Error",
      "Function": "Error",
      "Host.Aggregator": "Trace"
    }
  }
}

If host.json includes multiple categories that start with the same string, the more specific ones are matched first. Consider the following example that logs everything in the runtime, except Host.Aggregator, at the Error level:

{
  "logging": {
    "fileLoggingMode": "always",
    "logLevel": {
      "default": "Information",
      "Host": "Error",
      "Function": "Error",
      "Host.Aggregator": "Information"
    }
  }
}

You can use a log level setting of None to prevent any logs from being written for a category.
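For example, the following host.json fragment (a sketch) turns off the Host.Aggregator category entirely while leaving other categories at the Information level:

{
  "logging": {
    "logLevel": {
      "default": "Information",
      "Host.Aggregator": "None"
    }
  }
}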

Caution

Azure Functions integrates with Application Insights by storing telemetry events in Application Insights tables. Setting a category log level to any value other than Information prevents that telemetry from flowing to those tables. As a result, you can't see the related data in Application Insights or in the function Monitor tab.

Based on the preceding samples:

  • If the Host.Results category is set to the Error log level, the runtime gathers host execution telemetry events in the requests table only for failed function executions. Host execution details for successful executions aren't displayed in Application Insights or in the function Monitor tab.
  • If the Function category is set to the Error log level, the runtime stops gathering function telemetry data related to dependencies, customMetrics, and customEvents for all functions, so you can't see any of this data in Application Insights. Only traces logged at the Error level are gathered.

In both cases, you continue to collect error and exception data in Application Insights and in the function Monitor tab. For more information, see Solutions with high volume of telemetry.

Configure the aggregator

As noted in the previous section, the runtime aggregates data about function executions over a period of time. The default period is 30 seconds or 1,000 runs, whichever comes first. You can configure this setting in the host.json file. Here's an example:

{
    "aggregator": {
      "batchSize": 1000,
      "flushTimeout": "00:00:30"
    }
}

Configure sampling

Application Insights has a sampling feature that can protect you from producing too much telemetry data on completed executions at times of peak load. When the rate of incoming executions exceeds a specified threshold, Application Insights starts to randomly ignore some of the incoming executions. The default setting for maximum number of executions per second is 20 (five in version 1.x). You can configure sampling in host.json. Here's an example:

{
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond" : 20,
        "excludedTypes": "Request;Exception"
      }
    }
  }
}

You can exclude certain types of telemetry from sampling. In this example, data of type Request and Exception is excluded from sampling. This makes sure that all function executions (requests) and exceptions are logged while other types of telemetry remain subject to sampling.

To learn more, see Sampling in Application Insights.
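If the defaults still produce more data than you want, samplingSettings exposes additional tuning options. The following fragment is a sketch based on the host.json reference; the values shown are illustrative, and you should confirm the available options for your runtime version:

{
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond": 5,
        "initialSamplingPercentage": 100.0,
        "minSamplingPercentage": 0.1,
        "maxSamplingPercentage": 100.0,
        "evaluationInterval": "01:00:00",
        "excludedTypes": "Request;Exception"
      }
    }
  }
}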

Configure scale controller logs

This feature is in preview.

You can have the Azure Functions scale controller emit logs to either Application Insights or to Blob storage to better understand the decisions the scale controller is making for your function app.

To enable this feature, you add an application setting named SCALE_CONTROLLER_LOGGING_ENABLED to your function app settings. The value of this setting must be of the format <DESTINATION>:<VERBOSITY>, based on the following:

Property Description
<DESTINATION> The destination to which logs are sent. Valid values are AppInsights and Blob.
When you use AppInsights, make sure Application Insights is enabled in your function app.
When you set the destination to Blob, logs are created in a blob container named azure-functions-scale-controller in the default storage account set in the AzureWebJobsStorage application setting.
<VERBOSITY> Specifies the level of logging. Supported values are None, Warning, and Verbose.
When set to Verbose, the scale controller logs a reason for every change in the worker count, as well as information about the triggers that factor into those decisions. Verbose logs include trigger warnings and the hashes used by the triggers before and after the scale controller runs.

Tip

Keep in mind that scale controller logging increases the cost of monitoring your function app while it's enabled. Consider enabling logging only until you have collected enough data to understand how the scale controller is behaving, and then disable it.

For example, the following Azure CLI command turns on verbose logging from the scale controller to Application Insights:

az functionapp config appsettings set --name <FUNCTION_APP_NAME> \
--resource-group <RESOURCE_GROUP_NAME> \
--settings SCALE_CONTROLLER_LOGGING_ENABLED=AppInsights:Verbose

In this example, replace <FUNCTION_APP_NAME> and <RESOURCE_GROUP_NAME> with the name of your function app and the resource group name, respectively.

The following Azure CLI command disables logging by setting the verbosity to None:

az functionapp config appsettings set --name <FUNCTION_APP_NAME> \
--resource-group <RESOURCE_GROUP_NAME> \
--settings SCALE_CONTROLLER_LOGGING_ENABLED=AppInsights:None

You can also disable logging by removing the SCALE_CONTROLLER_LOGGING_ENABLED setting using the following Azure CLI command:

az functionapp config appsettings delete --name <FUNCTION_APP_NAME> \
--resource-group <RESOURCE_GROUP_NAME> \
--setting-names SCALE_CONTROLLER_LOGGING_ENABLED

With scale controller logging enabled, you can now query your scale controller logs.

Enable Application Insights integration

For a function app to send data to Application Insights, it needs to know the instrumentation key of an Application Insights resource. The key must be in an app setting named APPINSIGHTS_INSTRUMENTATIONKEY.

When you create your function app in the Azure portal, from the command line by using Azure Functions Core Tools, or by using Visual Studio Code, Application Insights integration is enabled by default. The Application Insights resource has the same name as your function app, and it's created either in the same region or in the nearest region.

New function app in the portal

To review the Application Insights resource being created, select it to expand the Application Insights window. You can change the New resource name or choose a different Location in an Azure geography where you want to store your data.

Enable Application Insights while creating a function app

When you choose Create, an Application Insights resource is created with your function app, which has the APPINSIGHTS_INSTRUMENTATIONKEY set in application settings. Everything is ready to go.

Add to an existing function app

If an Application Insights resource wasn't created with your function app, use the following steps to create the resource. You can then add the instrumentation key from that resource as an application setting in your function app.

  1. In the Azure portal, search for and select function app, and then choose your function app.

  2. Select the Application Insights is not configured banner at the top of the window. If you don't see this banner, then your app might already have Application Insights enabled.

    Enable Application Insights from the portal

  3. Expand Change your resource and create an Application Insights resource by using the settings specified in the following table.

    Setting Suggested value Description
    New resource name Unique app name It's easiest to use the same name as your function app, which must be unique in your subscription.
    Location West Europe If possible, use the same region as your function app, or one that's close to that region.

    Create an Application Insights resource

  4. Select Apply.

    The Application Insights resource is created in the same resource group and subscription as your function app. After the resource is created, close the Application Insights window.

  5. In your function app, select Configuration under Settings, and then select Application settings. If you see a setting named APPINSIGHTS_INSTRUMENTATIONKEY, Application Insights integration is enabled for your function app running in Azure. If for some reason this setting doesn't exist, add it using your Application Insights instrumentation key as the value.
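If you prefer to add this setting from the command line, you can use the Azure CLI. The following command is a sketch with placeholder values; replace them with your own function app name, resource group, and instrumentation key:

az functionapp config appsettings set --name <FUNCTION_APP_NAME> \
--resource-group <RESOURCE_GROUP_NAME> \
--settings "APPINSIGHTS_INSTRUMENTATIONKEY=<INSTRUMENTATION_KEY>"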

Note

Early versions of Functions used built-in monitoring, which is no longer recommended. When enabling Application Insights integration for such a function app, you must also disable built-in logging.

Disable built-in logging

When you enable Application Insights, disable the built-in logging that uses Azure Storage. The built-in logging is useful for testing with light workloads, but isn't intended for high-load production use. For production monitoring, we recommend Application Insights. If built-in logging is used in production, the logging record might be incomplete because of throttling on Azure Storage.

To disable built-in logging, delete the AzureWebJobsDashboard app setting. For information about how to delete app settings in the Azure portal, see the Application settings section of How to manage a function app. Before you delete the app setting, make sure no existing functions in the same function app use the setting for Azure Storage triggers or bindings.
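For example, the following Azure CLI command (using the same placeholder names as the earlier examples) removes the setting:

az functionapp config appsettings delete --name <FUNCTION_APP_NAME> \
--resource-group <RESOURCE_GROUP_NAME> \
--setting-names AzureWebJobsDashboard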

Solutions with high volume of telemetry

Your function apps can be an essential part of solutions that by nature generate high volumes of telemetry (IoT solutions, event-driven solutions, high-load financial systems, integration systems, and so on). In this case, consider extra configuration to reduce costs while maintaining observability.

Depending on how the generated telemetry is going to be consumed (real-time dashboards, alerting, detailed diagnostics, and so on), you need to define a strategy to reduce the volume of data generated. That strategy lets you properly monitor, operate, and diagnose your function apps in production. You can consider the following options:

  • Use sampling: as mentioned earlier, sampling dramatically reduces the volume of telemetry events ingested while maintaining a statistically correct analysis. Even with sampling, you might still get a high volume of telemetry. Inspect the options that adaptive sampling provides; for example, set maxTelemetryItemsPerSecond to a value that balances the volume generated with your monitoring needs. Keep in mind that telemetry sampling is applied per host executing your function app.

  • Default log level: use Warning or Error as the default value for all telemetry categories. You can then decide which categories to set at Information so that you can monitor and diagnose your functions properly.

  • Tune the telemetry for your functions: with the default log level set to Error or Warning, no detailed information from each function is gathered (dependencies, custom metrics, custom events, and traces). For the functions that are key for production monitoring, define an explicit entry for the Function.<YOUR_FUNCTION_NAME> category and set it to Information so that you can gather detailed information. At this point, to avoid gathering user-generated logs at the Information level, set the Function.<YOUR_FUNCTION_NAME>.User category to the Error or Warning log level.

  • Host.Aggregator category: as described in Configure categories, this category provides aggregated information about function invocations. The information from this category is gathered in the Application Insights customMetrics table and shown in the function Overview tab in the Azure portal. Depending on how you configure the aggregator, there's a delay, determined by flushTimeout, in the telemetry gathered. If you set this category to any value other than Information, you stop gathering data in the customMetrics table and metrics aren't displayed in the function Overview tab.

    The following screenshot shows Host.Aggregator telemetry data displayed in the function Overview tab.

    The following screenshot shows Host.Aggregator telemetry data in the Application Insights customMetrics table.

  • Host.Results category: as described in Configure categories, this category provides the runtime-generated logs that indicate the success or failure of a function invocation. The information from this category is gathered in the Application Insights requests table and shown in the function Monitor tab and in different Application Insights dashboards (Performance, Failures, and so on). If you set this category to any value other than Information, you only gather telemetry generated at the defined log level or higher. For example, setting it to Error results in tracking requests data only for failed executions.

    The following screenshot shows the Host.Results telemetry data displayed in the function Monitor tab.

    The following screenshot shows Host.Results telemetry data displayed in the Application Insights Performance dashboard.

  • Host.Aggregator vs Host.Results: both categories provide good insights about function executions. If needed, you can remove the detailed information from one of these categories so that you can use the other for monitoring and alerting. Here's a sample:

{
  "version": "2.0",  
  "logging": {
    "logLevel": {
      "default": "Warning",
      "Function": "Error",
      "Host.Aggregator": "Error",
      "Host.Results": "Information", 
      "Function.Function1": "Information",
      "Function.Function1.User": "Error"
    },
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond": 1,
        "excludedTypes": "Exception"
      }
    }
  }
} 

With this configuration, you will have:

  • The default value for all functions and telemetry categories is set to Warning (including the Microsoft and Worker categories), so by default all errors and warnings generated by both the runtime and custom logging are gathered.

  • The Function category log level is set to Error, so for all functions, by default, only exceptions and error logs will be gathered (dependencies, user-generated metrics, and user-generated events will be skipped).

  • For the Host.Aggregator category, because it's set to the Error log level, no aggregated information from function invocations is gathered in the customMetrics Application Insights table, and no information about execution counts (total, successful, failed, and so on) is shown in the function Overview tab.

  • For the Host.Results category, all the host execution information is gathered in the requests Application Insights table. All invocation results are shown in the function Monitor tab and in Application Insights dashboards.

  • For the function called Function1, the log level is set to Information, so for this specific function, all telemetry is gathered (dependencies, custom metrics, and custom events). For the same function, the Function.Function1.User category (user-generated traces) is set to Error, so only custom error logging is gathered. Note that per-function configuration isn't supported in v1.x.

  • Sampling is configured to send one telemetry item per second per type, excluding exceptions. This sampling happens on each server host running the function app, so if there are four instances, this configuration emits four telemetry items per second per type, plus all the exceptions that might occur. Note that, for metrics, counts such as request rate and exception rate are adjusted to compensate for the sampling rate, so that they show approximately correct values in Metric Explorer.

Tip

Experiment with different configurations to ensure that you cover your requirements for logging, monitoring, and alerting. Also, make sure you have detailed diagnostics in case of unexpected errors or malfunctions.

Overriding monitoring configuration at runtime

Finally, there could be situations where you need to quickly change the logging behavior of a certain category in production, and you don't want to make a whole deployment just for a change in the host.json file. For such cases, you can override the host.json values.

To configure these values at the app settings level (and avoid redeploying just for host.json changes), you can override specific host.json values by creating an equivalent value as an application setting. When the runtime finds an application setting in the format AzureFunctionsJobHost__path__to__setting, it overrides the equivalent host.json setting located at path.to.setting in the JSON. When expressed as an application setting, the dot (.) used to indicate JSON hierarchy is replaced by a double underscore (__). For example, you can use the following app settings to configure individual function log levels, as in the host.json example above.

Host.json path App setting
logging.logLevel.default AzureFunctionsJobHost__logging__logLevel__default
logging.logLevel.Host.Aggregator AzureFunctionsJobHost__logging__logLevel__Host.Aggregator
logging.logLevel.Function AzureFunctionsJobHost__logging__logLevel__Function
logging.logLevel.Function.Function1 AzureFunctionsJobHost__logging__logLevel__Function.Function1
logging.logLevel.Function.Function1.User AzureFunctionsJobHost__logging__logLevel__Function.Function1.User

You can override the settings directly at the Azure portal Function App Configuration blade or by using an Azure CLI or PowerShell script.

az functionapp config appsettings set --name MyFunctionApp --resource-group MyResourceGroup --settings "AzureFunctionsJobHost__logging__logLevel__Host.Aggregator=Information"
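To revert to the value defined in host.json, delete the override setting again. For example, the following Azure CLI command (reusing the same sample app and resource group names) removes the override:

az functionapp config appsettings delete --name MyFunctionApp --resource-group MyResourceGroup --setting-names "AzureFunctionsJobHost__logging__logLevel__Host.Aggregator"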

Note

Overriding the host.json through changing app settings will restart your function app.

Next steps

To learn more about monitoring, see: