This feature of Application Insights is generally available (GA) for App Services and in preview for Compute.
Find out how much time is spent in each method in your live web application by using the profiling tool of Azure Application Insights. It shows you detailed profiles of live requests that were served by your app, and highlights the 'hot path' that is using the most time. It automatically selects examples that have different response times. The profiler uses various techniques to minimize overhead.
The profiler currently works for ASP.NET web apps running on Azure App Services, in at least the Basic pricing tier.
Enable the profiler
Install Application Insights in your code. If it's already installed, make sure you have the latest version. (To do this, right-click your project in Solution Explorer, choose Manage NuGet Packages, select Updates, and update all packages.) Then redeploy your app.
Using ASP.NET Core? Check here.
In https://portal.azure.com, open the Application Insights resource for your web app. Open Performance and click Enable Application Insights Profiler....
Alternatively, you can always click Configure to view status, enable, or disable the Profiler.
Web apps that are configured with Application Insights are listed on the Configure blade. Follow the instructions to install the Profiler agent if needed. If no web app is configured with Application Insights yet, click Add Linked Apps.
Use the Enable Profiler or Disable Profiler buttons in the Configure blade to control the Profiler on all your linked web apps.
Disable the profiler
To stop or restart the profiler for an individual App Service instance, you'll find it in the App Service resource, in Web Jobs. To delete it, look under Extensions.
We recommend that you have the Profiler enabled on all your web apps to discover any performance issues as soon as possible.
If you use WebDeploy to deploy changes to your web application, ensure that you exclude the App_Data folder from being deleted during deployment. Otherwise, the profiler extension's files will be deleted when you next deploy the web application to Azure.
Using profiler with Azure VMs and Compute resources (preview)
When you enable Application Insights for Azure App Services at run time, the Profiler is automatically available. (If you already enabled Application Insights for the resource, you might need to update to the latest version through the Configure wizard.)
The default data retention period is five days, with a maximum of 10 GB of data ingested per day.
There is no charge for the profiler service. Your web app must be hosted in at least the Basic tier of App Services.
Overhead and sampling algorithm
The Profiler randomly runs for two minutes every hour on each virtual machine that hosts an application with the Profiler enabled, to capture traces. While the Profiler is running, it adds 5-15% CPU overhead to that server. The more servers available for hosting the application, the less impact the Profiler has on overall application performance: the sampling algorithm runs the Profiler on only 5% of servers at any given time, so the remaining servers can serve web requests and offset the overhead on the servers being profiled.
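As a back-of-envelope check of the figures above, the expected overhead can be worked out directly. This is only an illustration built from the numbers quoted in the text (2 minutes per hour, 5-15% CPU cost, ~5% of servers at once), not an official formula:

```python
# Rough overhead estimate from the figures quoted above.
worst_case_cpu_cost = 0.15      # 15% CPU while a trace is being captured
duty_cycle = 2 / 60             # each server profiles 2 minutes of every hour
fleet_fraction = 0.05           # at most ~5% of servers are profiled at once

# Average extra CPU load on a single server over a full hour.
per_server_avg = worst_case_cpu_cost * duty_cycle

# Instantaneous extra load across the whole fleet at any moment.
fleet_instant = worst_case_cpu_cost * fleet_fraction

print(f"per-server hourly average: {per_server_avg:.2%}")   # ~0.50%
print(f"fleet-wide instantaneous: {fleet_instant:.2%}")     # ~0.75%
```

Even in the worst case, the hourly average cost per server stays well under one percent, which is why the per-capture overhead is acceptable in production.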
Viewing profiler data
Open the Performance blade and scroll down to the operation list.
The columns in the table are:
- Count - The number of these requests in the time range of the blade.
- Median - The typical time your app takes to respond to a request. Half of all responses were faster than this.
- 95th percentile - 95% of responses were faster than this. If this figure is very different from the median, there might be an intermittent problem with your app. (Or it might be explained by a design feature such as caching.)
- Profiler Traces - An icon indicates that the profiler has captured stack traces for this operation.
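The Count, Median, and 95th-percentile columns can be reproduced from a list of raw response times. This sketch uses synthetic data (not values from the portal) and a simple nearest-rank percentile:

```python
import statistics

# Synthetic response times in ms; two slow outliers pull the 95th
# percentile far away from the median, the pattern the text describes.
response_times_ms = [120, 95, 110, 3000, 105, 130, 98, 101, 115, 90,
                     102, 99, 2500, 108, 125, 100, 97, 103, 112, 96]

count = len(response_times_ms)
median = statistics.median(response_times_ms)

# Nearest-rank 95th percentile: 95% of responses were faster than this.
ranked = sorted(response_times_ms)
p95 = ranked[int(0.95 * (count - 1))]

print(count, median, p95)
```

A p95 that is 20x the median, as here, is the signature of an intermittent problem rather than uniformly slow code.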
Click the View button to open the trace explorer. The explorer shows several samples that the profiler has captured, classified by response time.
If you are using the preview Performance blade, go to the Take Actions section in the bottom-right corner to view profiler traces, and click the Profiler Traces button.
Select a sample to show a code-level breakdown of time spent executing the request.
Show hot path opens the biggest leaf node, or at least a node close to it. In most cases, this node is adjacent to a performance bottleneck.
- Label: The name of the function or event. The tree shows a mix of code and events that occurred (such as SQL and http events). The top event represents the overall request duration.
- Elapsed: The time interval between the start of the operation and the end.
- When: Shows when the function/event was running in relation to other functions.
How to read performance data
The Microsoft service profiler uses a combination of sampling and instrumentation to analyze the performance of your application. When detailed collection is in progress, the service profiler samples the instruction pointer of each of the machine's CPUs every millisecond. Each sample captures the complete call stack of the thread currently executing, giving detailed and useful information about what that thread was doing at both high and low levels of abstraction. The service profiler also collects other events, such as context-switching events, TPL events, and thread-pool events, to track activity correlation and causality.
The call stack shown in the timeline view is the result of the above sampling and instrumentation. Because each sample captures the complete call stack of the thread, it includes code from the .NET framework, as well as other frameworks you reference.
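The sampling idea can be illustrated with a toy Python sampler. This is a heavy simplification, assumed for illustration only: it captures one thread's stack on a timer, nothing like the 1 ms, all-CPU, full-call-stack collection the service profiler performs:

```python
import collections
import sys
import threading
import time

samples = collections.Counter()

def sampler(target_ident, interval=0.005, duration=0.3):
    """Periodically capture the target thread's current frame and
    count which function it is executing (the 'leaf' of the stack)."""
    end = time.time() + duration
    while time.time() < end:
        frame = sys._current_frames().get(target_ident)
        if frame is not None:
            samples[frame.f_code.co_name] += 1
        time.sleep(interval)

def hot_function():
    # Busy work that should dominate the collected samples.
    total = 0
    deadline = time.time() + 0.35
    while time.time() < deadline:
        total += 1
    return total

t = threading.Thread(target=sampler, args=(threading.main_thread().ident,))
t.start()
hot_function()
t.join()

print(samples.most_common(3))
```

The function that consumes the most CPU time shows up in the most samples, which is exactly how a sampling profiler identifies the hot path without instrumenting every call.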
Object Allocation (clr!JIT\_New or clr!JIT\_Newarr1)
clr!JIT\_New and clr!JIT\_Newarr1 are helper functions in the .NET Framework that allocate memory from the managed heap. clr!JIT\_New is invoked when an object is allocated. clr!JIT\_Newarr1 is invoked when an object array is allocated. These two functions are typically very fast and should take a relatively small amount of time. If you see clr!JIT\_Newarr1 take a substantial amount of time in your timeline, it's an indication that the code may be allocating many objects and consuming a significant amount of memory.
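The cost of allocating many small objects, the situation that clr!JIT\_Newarr1 hotspots point at, can be made visible with Python's tracemalloc. This is a hypothetical analog for illustration; the .NET allocation mechanism is different:

```python
import tracemalloc

def allocate_many():
    # Allocates a fresh list per iteration and keeps them all alive,
    # the pattern that shows up as time spent in allocation helpers.
    return [[0] * 100 for _ in range(10_000)]

def reuse_buffer():
    # Reuses one buffer instead of allocating per iteration.
    buf = [0] * 100
    for _ in range(10_000):
        buf[0] += 1
    return buf

tracemalloc.start()
allocate_many()
_, peak_many = tracemalloc.get_traced_memory()
tracemalloc.reset_peak()          # requires Python 3.9+
reuse_buffer()
_, peak_reuse = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"allocate-per-iteration peak: {peak_many} bytes")
print(f"reused-buffer peak:          {peak_reuse} bytes")
```

The allocation-heavy version peaks orders of magnitude higher; in a .NET timeline, the same pattern surfaces as time inside the allocation helpers plus extra garbage-collection pressure.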
Loading Code (clr!ThePreStub)
clr!ThePreStub is a helper function in the .NET Framework that prepares code to execute for the first time. This typically includes, but is not limited to, JIT (Just-In-Time) compilation. For each C# method, clr!ThePreStub should be invoked at most once during the lifetime of a process. If you see clr!ThePreStub take a significant amount of time for a request, it indicates that the request is the first one to execute that method, and the time for the .NET Framework runtime to load that method is significant. Consider a warm-up process that executes that portion of the code before your users access it, or consider running NGen on your assemblies.
Lock Contention (clr!JITutil\_MonEnterWorker)
clr!JITutil\_MonEnterWorker indicates that the current thread is waiting for a lock to be released. This typically shows up when executing a C# lock statement, invoking the Monitor.Enter method, or invoking a method with the MethodImplOptions.Synchronized attribute. Lock contention typically happens when thread A acquires a lock and thread B tries to acquire the same lock before thread A releases it.
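The contention pattern described above, where thread B tries to take a lock that thread A still holds, can be reproduced in Python. This is an analog of the C# lock statement, not .NET code:

```python
import threading
import time

lock = threading.Lock()
blocked_for = {}

def thread_a():
    with lock:              # analog of C#: lock (obj) { ... }
        time.sleep(0.2)     # hold the lock while "working"

def thread_b():
    start = time.perf_counter()
    with lock:              # must wait until thread A releases the lock
        blocked_for["b"] = time.perf_counter() - start

a = threading.Thread(target=thread_a)
b = threading.Thread(target=thread_b)
a.start()
time.sleep(0.05)            # ensure A grabs the lock first
b.start()
a.join(); b.join()

print(f"thread B blocked for ~{blocked_for['b']:.2f}s")
```

In a trace, that waiting time in thread B is exactly what the lock-contention frames represent: wall-clock time spent blocked, not CPU work.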
Loading Code ([COLD])
If a method name contains [COLD], such as mscorlib.ni![COLD]System.Reflection.CustomAttribute.IsDefined, the .NET Framework runtime is executing, for the first time, code that is not optimized by profile-guided optimization. For each method, it should show up at most once during the lifetime of the process. If loading code takes a significant amount of time for a request, it indicates that the request is the first one to execute the unoptimized portion of the method. Consider a warm-up process that executes that portion of the code before your users access it.
Send HTTP Request
Methods such as HttpClient.Send indicate that the code is waiting for an HTTP request to complete. Methods such as SqlCommand.Execute indicate that the code is waiting for a database operation to complete.
AWAIT\_TIME indicates that the code is waiting for another task to complete. This typically happens with the C# await statement. When the code awaits, the thread unwinds and returns control to the thread pool, and no thread is blocked waiting for the await to finish. Logically, however, the thread that performed the await is 'blocked' waiting for the operation to complete. AWAIT\_TIME indicates the blocked time spent waiting for the task to complete.
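This behavior, time that is logically blocked while no OS thread is actually held, has a direct analog in Python's asyncio. A sketch, not .NET code:

```python
import asyncio
import time

async def fetch(delay):
    # While awaiting, no OS thread is blocked: control returns to the
    # event loop, just as a C# 'await' returns the thread to the pool.
    await asyncio.sleep(delay)
    return delay

async def main():
    start = time.perf_counter()
    # Two 0.2 s waits overlap, so elapsed time is ~0.2 s, not 0.4 s.
    results = await asyncio.gather(fetch(0.2), fetch(0.2))
    return time.perf_counter() - start, results

elapsed, results = asyncio.run(main())
print(f"elapsed ~{elapsed:.2f}s for two concurrent 0.2s awaits")
```

The logical wait per call is still 0.2 s; that logical wait, rather than any thread's CPU time, is what AWAIT\_TIME measures.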
BLOCKED_TIME indicates that the code is waiting for another resource to be available, such as a synchronization object, an available thread, or a request to finish.
- CPU: The CPU is busy executing instructions.
- DISK: The application is performing disk operations.
- NETWORK: The application is performing network operations.
This is a visualization of how the INCLUSIVE samples collected for a node vary over time. The total range of the request is divided into 32 time buckets, and the inclusive samples for that node are accumulated into those 32 buckets. Each bucket is then represented as a bar whose height represents a scaled value. For nodes marked BLOCKED_TIME, or where there is an obvious relationship of consuming a resource (CPU, disk, thread), the bar represents consuming one of those resources for the period of that bucket. For these metrics, you can get greater than 100% by consuming multiple resources. For example, if on average you use two CPUs over an interval, you get 200%.
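The bucketing described above can be sketched directly: sample timestamps for a node are folded into 32 equal time slices spanning the request. The sample data below is hypothetical:

```python
def bucketize(sample_times, request_start, request_end, n_buckets=32):
    """Fold inclusive-sample timestamps into n_buckets equal time slices."""
    buckets = [0] * n_buckets
    span = request_end - request_start
    for t in sample_times:
        # Clamp so a sample exactly at request_end lands in the last bucket.
        idx = min(int((t - request_start) / span * n_buckets), n_buckets - 1)
        buckets[idx] += 1
    return buckets

# Hypothetical samples: a node active only in the first ~40% of a 1 s request.
samples = [i / 100 for i in range(40)]       # timestamps 0.00 .. 0.39 s
buckets = bucketize(samples, 0.0, 1.0)
print(len(buckets), sum(buckets))
```

Scaling each bucket's count by the sampling rate turns counts into resource consumption, which is how a bar can exceed 100% when multiple CPUs are busy at once.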
Too many active profiling sessions
Currently, you can enable the profiler on at most four Azure Web Apps and deployment slots running in the same service plan. If the profiler web job reports too many active profiling sessions, move some Web Apps to a different service plan.
How can I know whether Application Insights profiler is running?
The profiler runs as a continuous web job in the Web App. You can open the Web App resource in https://portal.azure.com and check the "ApplicationInsightsProfiler" status in the WebJobs blade. If it isn't running, open Logs to find out more.
Why can't I find any stack examples even though the profiler is running?
Here are a few things you can check.
- Make sure your Web App's Service Plan is Basic tier or above.
- Make sure your Web App has Application Insights SDK 2.2 Beta or later enabled.
- Make sure your Web App has the APPINSIGHTS_INSTRUMENTATIONKEY setting configured with the same instrumentation key used by the Application Insights SDK.
- Make sure your Web App is running on .NET Framework 4.6.
- If it's an ASP.NET Core application, please also check the required dependencies.
After the profiler is started, there is a short warm-up period when the profiler actively collects several performance traces. After that, the profiler collects performance traces for two minutes in every hour.
I was using Azure Service Profiler. What happened to it?
When you enable the Application Insights Profiler, the Azure Service Profiler agent is disabled.
Double counting in parallel threads
In some cases the total time metric in the stack viewer is more than the actual duration of the request.
This can happen when there are two or more threads associated with a request, operating in parallel. The total thread time is then more than the elapsed time. In many cases one thread may be waiting on the other to complete. The viewer tries to detect this and omit the uninteresting wait, but errs on the side of showing too much rather than omitting what may be critical information.
When you see parallel threads in your traces, you need to determine which threads are waiting so you can determine the critical path for the request. In most cases, the thread that quickly goes into a wait state is simply waiting on the other threads. Concentrate on the others and ignore the time in the waiting threads.
No profiling data
If the data you are trying to view is older than a couple of weeks, try limiting your time filter and try again.
Check that proxies or a firewall have not blocked access to https://gateway.azureserviceprofiler.net.
Check that the Application Insights instrumentation key you are using in your app is the same as the Application Insights resource you've enabled profiling with. The key is usually in ApplicationInsights.config but can also be found in web.config or app.config.
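A quick way to run that check is to parse the key out of the app's config and compare it with the key shown on the Application Insights resource. This sketch parses an inline ApplicationInsights.config fragment; real files may use a different layout, or the key may live in web.config or app.config instead:

```python
import xml.etree.ElementTree as ET

# Inline stand-in for ApplicationInsights.config. The namespace below is
# the one the SDK normally writes, but treat it as an assumption.
CONFIG = """\
<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
  <InstrumentationKey>00000000-0000-0000-0000-000000000000</InstrumentationKey>
</ApplicationInsights>
"""

def instrumentation_key(xml_text):
    """Return the InstrumentationKey value, or None if it is missing."""
    root = ET.fromstring(xml_text)
    ns = {"ai": "http://schemas.microsoft.com/ApplicationInsights/2013/Settings"}
    node = root.find("ai:InstrumentationKey", ns)
    return node.text.strip() if node is not None else None

key = instrumentation_key(CONFIG)
print(key)
```

If the parsed key differs from the one on the resource where you enabled profiling, traces are being sent to a resource you are not looking at.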
Error report in the profiling viewer
File a support ticket from the portal. Please include the correlation ID from the error message.
Deployment error Directory Not Empty 'D:\home\site\wwwroot\App_Data\jobs'
If you are redeploying your web app to an App Services resource with the Profiler enabled, you might see an error similar to the following: Directory Not Empty 'D:\home\site\wwwroot\App_Data\jobs'. This error occurs if you run Web Deploy from scripts or from the VSTS Deployment Pipeline. The solution is to add the following additional deployment parameters to the Web Deploy task:
-skip:Directory='.*\\App_Data\\jobs\\continuous\\ApplicationInsightsProfiler.*' -skip:skipAction=Delete,objectname='dirPath',absolutepath='.*\\App_Data\\jobs\\continuous$' -skip:skipAction=Delete,objectname='dirPath',absolutepath='.*\\App_Data\\jobs$' -skip:skipAction=Delete,objectname='dirPath',absolutepath='.*\\App_Data$'
These parameters prevent Web Deploy from deleting the folder used by the Application Insights Profiler and unblock the redeploy process. They don't affect the currently running Profiler instance.
When you configure the profiler, the following updates are made to the Web App's settings. You can apply them manually if your environment requires it, for example if your application runs in an Azure App Service Environment (ASE):
- In the web app control blade, open Settings.
- Set ".NET Framework version" to v4.6.
- Set "Always On" to On.
- Add app setting "APPINSIGHTS_INSTRUMENTATIONKEY" and set the value to the same instrumentation key used by the SDK.
- Open Advanced Tools.
- Click "Go" to open the Kudu website.
- In the Kudu website, select "Site extensions".
- Install "Application Insights" from Gallery.
- Restart the web app.
ASP.NET Core Support
ASP.NET Core applications need the Microsoft.ApplicationInsights.AspNetCore NuGet package, version 2.1.0-beta6 or later, to work with the Profiler. Lower versions are no longer supported as of 6/27/2017.