Log Analytics workspace data export in Azure Monitor (preview)
Log Analytics workspace data export in Azure Monitor allows you to continuously export data from selected tables in your Log Analytics workspace to an Azure storage account or Azure Event Hubs as it's collected. This article provides details on this feature and steps to configure data export in your workspaces.
Data in Log Analytics is available for the retention period defined in your workspace and is used in various experiences provided in Azure Monitor and other Azure services. Data export lets you meet additional requirements:
- Comply with tamper-protected store requirements -- data can't be altered in Log Analytics once ingested, but it can be purged. Export to a storage account configured with immutability policies to keep data tamper protected.
- Integration with Azure services and other tools -- export to an event hub in near-real-time to send data to your services and tools as it arrives to Azure Monitor.
- Keep audit and security data for a long time at low cost -- export to a storage account in the same region as your workspace, or replicate data to storage accounts in other regions using any of the Azure Storage redundancy options, including GRS and GZRS.
Once data export is configured in your Log Analytics workspace, any new data sent to the selected tables in the workspace is automatically exported in near-real-time to your storage account or to your event hub.
All data from included tables is exported without a filter. For example, when you configure a data export rule for the SecurityEvent table, all data sent to the SecurityEvent table is exported starting from the configuration time.
Other export options
Log Analytics workspace data export continuously exports data from a Log Analytics workspace. Other options to export data for particular scenarios include the following:
- Scheduled export from a log query using a Logic App. This is similar to the data export feature but allows you to send filtered or aggregated data to Azure storage. This method is subject to log query limits; see Archive data from Log Analytics workspace to Azure storage using Logic App.
- One-time export to a local machine using a PowerShell script. See Invoke-AzOperationalInsightsQueryExport.
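If you prefer the Azure CLI for a one-time pull, you can run a query and redirect the output to a local file. This is a minimal sketch, assuming the log-analytics CLI extension is installed and <workspace-guid> is your workspace ID; the query is only an example:

# Install the extension once if needed: az extension add --name log-analytics
az monitor log-analytics query --workspace <workspace-guid> --analytics-query "SecurityEvent | take 100" --timespan P1D > securityevent-sample.json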
- All tables will eventually be supported in export, but support is currently limited to the tables specified in the supported tables section below.
- The current custom log tables aren't supported in export. A new version of custom logs, available in preview since February 2022, will be supported in export.
- You can define up to 10 enabled rules in your workspace. Additional rules are allowed when disabled.
- Destinations must be in the same region as the Log Analytics workspace.
- Table names can be no longer than 60 characters when exporting to a storage account and 47 characters when exporting to an event hub. Tables with longer names will not be exported.
- Data export isn't supported in these regions currently:
- Korea South
- Jio India Central
- Government regions
Data export is optimized for moving large data volumes to your destinations, and in certain retry conditions can include a small fraction of duplicated records. The export operation to your destination can fail when ingress limits are reached; see details under Create or update data export rule. Export continues to retry for up to 30 minutes; if the destination is still unavailable to accept data, data is discarded until the destination becomes available.
Billing for the Log Analytics Data Export feature is not enabled yet. See the pricing page for more details.
The data export destination must be available before you create export rules in your workspace. Destinations don't have to be in the same subscription as your workspace. When using Azure Lighthouse, it is also possible to send data to destinations in another Azure Active Directory tenant.
You need 'write' permissions to both the workspace and the destination to configure a data export rule.
Don't use an existing storage account that has other, non-monitoring data stored in it. Keeping the export destination separate gives you better control over access to the data and prevents reaching the storage ingress rate limit, which causes failures and latency.
To send data to immutable storage, set the immutability policy for the storage account as described in Set and manage immutability policies for Blob storage. You must follow all steps in this article, including enabling protected append blob writes.
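As an illustration of the container-level flavor of this policy, the following Azure CLI sketch sets a time-based retention policy with protected append blob writes. It assumes the exported container am-SecurityEvent already exists, and the 7-day period is an arbitrary example:

# 7-day time-based retention that still permits protected append writes
az storage container immutability-policy create --resource-group <resource-group> --account-name <storage-account> --container-name am-SecurityEvent --period 7 --allow-protected-append-writes true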
The storage account must be StorageV1 or above and in the same region as your workspace. If you need to replicate your data to other storage accounts in other regions, you can use any of the Azure Storage redundancy options including GRS and GZRS.
Data is sent to storage accounts as it reaches Azure Monitor and is exported to destinations located in the workspace's region. A container is created in the storage account for each table, with the name am- followed by the name of the table. For example, the table SecurityEvent would be sent to a container named am-SecurityEvent.
Blobs are stored in 5-minute folders in the following path structure: WorkspaceResourceId=/subscriptions/subscription-id/resourcegroups/<resource-group>/providers/microsoft.operationalinsights/workspaces/<workspace>/y=<four-digit numeric year>/m=<two-digit numeric month>/d=<two-digit numeric day>/h=<two-digit 24-hour clock hour>/m=<two-digit 60-minute clock minute>/PT05M.json. An append blob is limited to 50,000 writes; when this limit is reached, additional blobs are created with the naming pattern PT05M_#.json, where # is an incremental blob count.
The storage account data format is in JSON lines, where each record is delimited by a newline, with no outer records array and no commas between JSON records.
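For example, two exported records appear as two self-contained JSON objects on separate lines. The field values below are illustrative, not an exact schema:

{"TimeGenerated":"2021-12-01T10:00:01.1234567Z","Computer":"contoso-vm1","EventID":4625}
{"TimeGenerated":"2021-12-01T10:00:02.7654321Z","Computer":"contoso-vm2","EventID":4624}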
You need 'write' permissions to both the workspace and the destination to configure a data export rule. The shared access policy for the event hub namespace defines the permissions that the streaming mechanism has. Streaming to an event hub requires Manage, Send, and Listen permissions. To update the export rule, you must have the ListKey permission on that Event Hubs authorization rule.
Don't use an existing event hub that has other, non-monitoring data in it. Keeping the export destination separate gives you better control over access to the data and prevents reaching the event hub namespace ingress rate limit, which causes failures and latency.
Data is sent to your event hub as it reaches Azure Monitor and is exported to destinations located in the workspace's region. You can create multiple export rules to the same event hub namespace by providing a different event hub name in each rule. When an event hub name isn't provided, a default event hub is created for each table that you export, with the name am- followed by the name of the table. For example, the table SecurityEvent would be sent to an event hub named am-SecurityEvent. The number of supported event hubs in the 'Basic' and 'Standard' namespace tiers is 10. When exporting more than 10 tables to these tiers, either split the tables between several export rules to different event hub namespaces, or provide an event hub name in the rule to export all tables to that event hub.
- The 'Basic' event hub tier is limited -- it supports a lower event size and has no Auto-inflate option to automatically scale up and increase the number of throughput units. Because the data volume to your workspace increases over time and consequent event hub scaling is required, use the 'Standard', 'Premium', or 'Dedicated' event hub tiers with the Auto-inflate feature enabled (see the sample command after this list). For details, see Automatically scale up Azure Event Hubs throughput units.
- Data export can't reach event hub resources when virtual networks are enabled. You have to enable the Allow trusted Microsoft services to bypass this firewall setting in the event hub to grant access to your Event Hubs resources.
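To illustrate the Auto-inflate recommendation in the list above, this Azure CLI sketch creates a 'Standard' tier namespace with Auto-inflate enabled. The namespace name, region, and throughput-unit ceiling are placeholders:

# Standard-tier namespace that can scale up to 20 throughput units on demand
az eventhubs namespace create --resource-group <resource-group> --name <namespace-name> --location westus2 --sku Standard --enable-auto-inflate true --maximum-throughput-units 20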
Enable data export
The following steps must be performed to enable Log Analytics data export. See the following sections for more details on each.
- Register resource provider.
- Allow trusted Microsoft services.
- Create one or more data export rules that define the tables to export and their destination.
Register resource provider
Azure resource provider Microsoft.Insights needs to be registered in your subscription to enable Log Analytics data export.
This resource provider is probably already registered for most Azure Monitor users. To verify, go to Subscriptions in the Azure portal. Select your subscription, and then click Resource providers in the Settings section of the menu. Locate Microsoft.Insights. If its status is Registered, no action is needed; if not, click Register.
You can also use any of the available methods to register a resource provider as described in Azure resource providers and types. Following is a sample command using CLI:
az provider register --namespace 'Microsoft.insights'
Following is a sample command using PowerShell:
Register-AzResourceProvider -ProviderNamespace Microsoft.insights
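To confirm the registration state from the command line (a quick check, not a required step):

az provider show --namespace 'Microsoft.insights' --query registrationState --output tsv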
Allow trusted Microsoft services
If you have configured your storage account to allow access from selected networks, you need to add an exception to allow Azure Monitor to write to the account. From Firewalls and virtual networks for your storage account, select Allow trusted Microsoft services to access this storage account.
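The same exception can be set from the Azure CLI. A minimal sketch with placeholder names that keeps the default network action as Deny while letting trusted Azure services through:

az storage account update --resource-group <resource-group> --name <storage-account> --bypass AzureServices --default-action Deny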
Monitoring storage account
Use a separate storage account for export.
Configure an alert on the metric below:
| Scope | Metric namespace | Metric | Aggregation | Threshold |
| --- | --- | --- | --- | --- |
| storage-name | Account | Ingress | Sum | 80% of max ingress per alert evaluation period. For example: the limit is 60 Gbps for general-purpose v2 in West US, so the threshold is 14,400 Gb per 5-minute evaluation period |
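As a sketch of how such an alert could be created with the Azure CLI: the resource names are placeholders, and the byte value is illustrative, since the Ingress metric is reported in bytes while the example limit above is expressed in gigabits (14,400 Gb is roughly 1.8 TB):

az monitor metrics alert create --name storage-ingress-alert --resource-group <resource-group> --scopes <storage-account-resource-id> --condition "total Ingress > 1800000000000" --window-size 5m --evaluation-frequency 5m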
Alert remediation actions
- Use a separate storage account for export that isn't shared with non-monitoring data.
- Azure Storage standard accounts support a higher ingress limit by request. To request an increase, contact Azure Support.
- Split tables between more storage accounts.
Monitoring event hub
Configure alerts on the metrics below:
| Scope | Metric namespace | Metric | Aggregation | Threshold |
| --- | --- | --- | --- | --- |
| namespaces-name | Event Hub standard metrics | Incoming bytes | Sum | 80% of max ingress per alert evaluation period. For example: the limit is 1 MB/s per unit (TU or PU) and five units are used, so the threshold is 1,200 MB per 5-minute evaluation period |
| namespaces-name | Event Hub standard metrics | Incoming requests | Count | 80% of max events per alert evaluation period. For example: the limit is 1,000/s per unit (TU or PU) and five units are used, so the threshold is 1,200,000 per 5-minute evaluation period |
| namespaces-name | Event Hub standard metrics | Quota exceeded errors | Count | 1% of requests. For example: requests per 5 minutes are 600,000, so the threshold is 6,000 per 5-minute evaluation period |
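A corresponding Azure CLI sketch for the incoming-bytes alert; the names are placeholders and the threshold is 1,200 MB expressed in bytes, so adjust it to your own unit count:

az monitor metrics alert create --name eventhub-ingress-alert --resource-group <resource-group> --scopes <eventhub-namespace-resource-id> --condition "total IncomingBytes > 1258291200" --window-size 5m --evaluation-frequency 5m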
Alert remediation actions
- Use a separate event hub namespace for export that isn't shared with non-monitoring data.
- Configure the Auto-inflate feature to automatically scale up and increase the number of throughput units to meet usage needs.
- Verify that the number of throughput units increases to accommodate the data volume.
- Split tables between more namespaces.
- Use 'Premium' or 'Dedicated' tiers for higher throughput.
Create or update data export rule
A data export rule defines the destination and tables for which data is exported. You can create 10 rules in the 'enabled' state in your workspace; more rules are allowed in the 'disabled' state. You can use the same storage account and event hub namespace in multiple rules in the same workspace. When event hub names are provided in rules, they must be unique within the workspace.
- You can include tables that aren't yet supported in export; no data will be exported for them until the tables are supported.
- The current custom log tables aren't supported in export. The next generation of custom logs, available in preview since early 2022, is supported.
- Export to storage account - a separate container is created in the storage account for each table.
- Export to event hub - if an event hub name isn't provided, a separate event hub is created for each table. The number of supported event hubs in the 'Basic' and 'Standard' namespace tiers is 10. When exporting more than 10 tables to these tiers, either split the tables between several export rules to different event hub namespaces, or provide an event hub name in the rule to export all tables to that event hub.
In the Log Analytics workspace menu in the Azure portal, select Data Export from the Settings section and click New export rule from the top of the middle pane.
Follow the steps, then click Create.
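You can also create the rule with the Azure CLI. A minimal sketch with placeholder names; the exact parameter spelling may vary between CLI versions:

# Export the SecurityEvent and Heartbeat tables to a storage account
az monitor log-analytics workspace data-export create --resource-group <resource-group> --workspace-name <workspace-name> --name export-rule-1 --tables SecurityEvent Heartbeat --destination <storage-account-resource-id>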
View data export rule configuration
In the Log Analytics workspace menu in the Azure portal, select Data Export from the Settings section.
Click a rule for configuration view.
Disable or update an export rule
Export rules can be disabled to let you stop the export for a certain period, such as when you're testing. In the Log Analytics workspace menu in the Azure portal, select Data Export from the Settings section and click the status toggle to disable or enable an export rule.
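The same toggle is available from the Azure CLI. A sketch with placeholder names, assuming your CLI version supports the --enable flag on update:

az monitor log-analytics workspace data-export update --resource-group <resource-group> --workspace-name <workspace-name> --name export-rule-1 --enable false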
Delete an export rule
In the Log Analytics workspace menu in the Azure portal, select Data Export from the Settings section, then click the ellipsis to the right of the rule and click Delete.
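The CLI equivalent, with placeholder names:

az monitor log-analytics workspace data-export delete --resource-group <resource-group> --workspace-name <workspace-name> --name export-rule-1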
View all data export rules in a workspace
In the Log Analytics workspace menu in the Azure portal, select Data Export from the Settings section to view all export rules in the workspace.
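Or from the Azure CLI, with placeholder names:

az monitor log-analytics workspace data-export list --resource-group <resource-group> --workspace-name <workspace-name>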
If the data export rule includes an unsupported table, the configuration will succeed, but no data will be exported for that table. If the table is later supported, then its data will be exported at that time.
Supported tables are currently limited to those specified below. All data from the table will be exported unless limitations are specified. This list is updated as more tables are added.
| Table | Support in export |
| --- | --- |
| ConfigurationData | Partial support – some of the data is ingested through internal services that aren't supported in export. This portion is currently missing in export. |
| Event | Partial support – data arriving from the Log Analytics agent (MMA) or Azure Monitor Agent (AMA) is fully supported in export. Data arriving via the Diagnostics extension agent is collected through storage; this path isn't supported in export. |
| InsightsMetrics | Partial support – some of the data is ingested through internal services that aren't supported in export. This portion is currently missing in export. |
| OfficeActivity | Partial support in government clouds – some of the data is ingested via webhooks from Office 365 into Log Analytics. This portion is currently missing in export. |
| Operation | Partial support – some of the data is ingested through internal services that aren't supported in export. This portion is currently missing in export. |
| Perf | Partial support – only Windows perf data is currently supported. Linux perf data is currently missing in export. |
| SecurityEvent | Partial support – data arriving from the Log Analytics agent (MMA) or Azure Monitor Agent (AMA) is fully supported in export. Data arriving via the Diagnostics extension agent is collected through storage; this path isn't supported in export. |
| Syslog | Partial support – data arriving from the Log Analytics agent (MMA) or Azure Monitor Agent (AMA) is fully supported in export. Data arriving via the Diagnostics extension agent is collected through storage; this path isn't supported in export. |