Manage usage and costs with Azure Monitor Logs

Note

This article describes how to understand and control your costs for Azure Monitor Logs. A related article, Monitoring usage and estimated costs, describes how to view usage and estimated costs across multiple Azure monitoring features for different pricing models. All prices and costs shown in this article are for example purposes only.

Azure Monitor Logs is designed to scale and support collecting, indexing, and storing massive amounts of data per day from any source in your enterprise or deployed in Azure. While this may be a primary driver for your organization, cost-efficiency is ultimately the underlying driver. To that end, it's important to understand that the cost of a Log Analytics workspace isn't based only on the volume of data collected; it also depends on the plan selected and on how long you choose to store data generated from your connected sources.

In this article we review how you can proactively monitor ingested data volume and storage growth, and define limits to control those associated costs.

Pricing model

The default pricing for Log Analytics is a Pay-As-You-Go model based on data volume ingested and, optionally, longer data retention. Data volume is measured as the size of the data that will be stored, in GB (10^9 bytes). Each Log Analytics workspace is charged as a separate service and contributes to the bill for your Azure subscription. The amount of data ingested can be considerable depending on the following factors:

  • Number of management solutions enabled and their configuration
  • Number of VMs monitored
  • Type of data collected from each monitored VM

In addition to the Pay-As-You-Go model, Log Analytics has Capacity Reservation tiers which enable you to save as much as 25% compared to the Pay-As-You-Go price. Capacity Reservation pricing enables you to buy a reservation starting at 100 GB/day. Any usage above the reservation level will be billed at the Pay-As-You-Go rate. The Capacity Reservation tiers have a 31-day commitment period. During the commitment period, you can change to a higher Capacity Reservation tier (which restarts the 31-day commitment period), but you cannot move back to Pay-As-You-Go or to a lower Capacity Reservation tier until the commitment period is finished. Billing for the Capacity Reservation tiers is done on a daily basis. Learn more about Log Analytics Pay-As-You-Go and Capacity Reservation pricing.
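
If you're weighing a reservation against Pay-As-You-Go, it helps to know your recent average daily billable volume. Here is a minimal sketch using the Usage table, patterned after the volume queries later in this article:

Usage
// Quantity is reported in MB, so divide by 1000 to get GB
| where TimeGenerated > ago(32d)
| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
| where IsBillable == true
| summarize BillableGBperDay = sum(Quantity) / 1000. by bin(StartTime, 1d)
| summarize AvgDailyBillableGB = avg(BillableGBperDay)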

In all pricing tiers, an event's data size is calculated from a string representation of the properties that are stored in Log Analytics for that event, whether the data is sent from an agent or added during the ingestion process. This includes any custom fields that are added as data is collected and then stored in Log Analytics. Several properties common to all data types, including some Log Analytics Standard Properties, are excluded from the calculation of the event size: _ResourceId, _ItemId, _IsBillable, _BilledSize, and Type. All other properties stored in Log Analytics are included in the calculation of the event size. Some data types are free from data ingestion charges altogether, for example the AzureActivity, Heartbeat, and Usage types. To determine whether an event was excluded from billing for data ingestion, you can use the _IsBillable property as shown below. Usage is reported in GB (1.0E9 bytes).
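
For example, the following minimal sketch splits the last day's ingestion into billable and non-billable volume by table, using the standard _IsBillable and _BilledSize properties; because it scans all data types, keep the time range short:

union *
// _BilledSize is in bytes; divide by 1.0E9 to get GB
| where TimeGenerated > ago(1d)
| summarize IngestedGB = sum(_BilledSize) / 1.0E9 by Type, _IsBillable
| sort by IngestedGB desc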

Also note that some solutions, such as Azure Security Center, Azure Sentinel, and Configuration management, have their own pricing models.

Log Analytics Dedicated Clusters

Log Analytics Dedicated Clusters are collections of workspaces in a single managed Azure Data Explorer cluster to support advanced scenarios such as Customer-Managed Keys. Log Analytics Dedicated Clusters use a Capacity Reservation pricing model which must be configured to at least 1000 GB/day. This capacity level has a 25% discount compared to Pay-As-You-Go pricing. Any usage above the reservation level will be billed at the Pay-As-You-Go rate. The cluster Capacity Reservation has a 31-day commitment period after the reservation level is increased. During the commitment period the capacity reservation level cannot be reduced, but it can be increased at any time. When workspaces are associated to a cluster, the data ingestion billing for those workspaces is done at the cluster level using the configured capacity reservation level. Learn more about creating a Log Analytics cluster and associating workspaces to it. Capacity Reservation pricing information is available at the Azure Monitor pricing page.

The cluster capacity reservation level is configured programmatically with Azure Resource Manager using the Capacity parameter under Sku. The Capacity is specified in units of GB and can have values of 1000 GB/day or more, in increments of 100 GB/day. This is detailed at Azure Monitor customer-managed key. If your cluster needs a reservation above 2000 GB/day, contact us at LAIngestionRate@microsoft.com.

There are two modes of billing for usage on a cluster. These can be specified by the billingType parameter when configuring your cluster. The two modes are:

  1. Cluster: in this case (which is the default), billing for ingested data is done at the cluster level. The ingested data quantities from each workspace associated to the cluster are aggregated to calculate the daily bill for the cluster. Note that per-node allocations from Azure Security Center are applied at the workspace level prior to this aggregation of data across all workspaces in the cluster.

  2. Workspaces: the Capacity Reservation costs for your cluster are attributed proportionately to the workspaces in the cluster (after accounting for per-node allocations from Azure Security Center for each workspace). If the total data volume ingested into the workspaces for a day is less than the Capacity Reservation, each workspace is billed for its ingested data at the effective per-GB Capacity Reservation rate (a fraction of the Capacity Reservation), and the unused part of the Capacity Reservation is billed to the cluster resource. If the total data volume ingested for a day is more than the Capacity Reservation, each workspace is billed for a fraction of the Capacity Reservation based on its fraction of that day's ingested data, plus the same fraction of the ingested data above the Capacity Reservation. Nothing is billed to the cluster resource if the total data volume ingested for the day is over the Capacity Reservation.
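
As a worked example of the Workspaces mode with a 1000 GB/day reservation: if two workspaces ingest 400 GB and 200 GB in a day (600 GB total, under the reservation), they are billed 40% and 20% of the reservation respectively, and the unused 40% is billed to the cluster resource; if they instead ingest 800 GB and 400 GB (1200 GB total), they are billed 2/3 and 1/3 of the reservation plus 2/3 and 1/3 of the 200 GB overage, and nothing is billed to the cluster resource.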

In both cluster billing modes, data retention is billed per workspace. Note that cluster billing starts when the cluster is created, regardless of whether workspaces have been associated to the cluster. Also note that workspaces associated to a cluster no longer have a pricing tier of their own.

Estimating the costs to manage your environment

If you're not yet using Azure Monitor Logs, you can use the Azure Monitor pricing calculator to estimate the cost of using Log Analytics. Start by entering "Azure Monitor" in the Search box, and clicking on the resulting Azure Monitor tile. Scroll down the page to Azure Monitor, and select Log Analytics from the Type dropdown. Here you can enter the number of VMs and the GB of data you expect to collect from each VM. Typically 1 to 3 GB of data per month is ingested from a typical Azure VM. If you're already evaluating Azure Monitor Logs, you can use your data statistics from your own environment. See below for how to determine the number of monitored VMs and the volume of data your workspace is ingesting.

Understand your usage and estimate costs

If you're using Azure Monitor Logs now, it's easy to understand what the costs are likely to be based on recent usage patterns. To do this, use Log Analytics Usage and Estimated Costs to review and analyze data usage. This shows how much data is collected by each solution, how much data is being retained, and an estimate of your costs based on the amount of data ingested and any additional retention beyond the included amount.

Usage and estimated costs

To explore your data in more detail, click on the icon at the top right of either of the charts on the Usage and Estimated Costs page. Now you can work with this query to explore more details of your usage.

Logs view

From the Usage and Estimated Costs page you can review your data volume for the month. This includes all the billable data received and retained in your Log Analytics workspace.
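
As a minimal sketch, the same month-to-date check can also be run as a query against the Usage table:

Usage
// Billable volume since the start of the current calendar month, in GB
| where TimeGenerated > startofmonth(now())
| where IsBillable == true
| summarize MonthToDateBillableGB = sum(Quantity) / 1000.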

Log Analytics charges are added to your Azure bill. You can see details of your Azure bill under the Billing section of the Azure portal or in the Azure Billing Portal.

Viewing Log Analytics usage on your Azure bill

Azure provides a great deal of useful functionality in the Azure Cost Management + Billing hub. For instance, the "Cost analysis" functionality enables you to view your spend on Azure resources. First, adding a filter by "Resource type" (microsoft.operationalinsights/workspace for Log Analytics and microsoft.operationalinsights/cluster for Log Analytics Clusters) will allow you to track your Log Analytics spend. Then for "Group by", select "Meter category" or "Meter". Note that other services such as Azure Security Center and Azure Sentinel also bill their usage against Log Analytics workspace resources. To see the mapping to service name, you can select the Table view instead of a chart.

You can gain more understanding of your usage by downloading your usage from the Azure portal. In the downloaded spreadsheet you can see usage per Azure resource (e.g. Log Analytics workspace) per day. In this Excel spreadsheet, usage from your Log Analytics workspaces can be found by first filtering on the "Meter Category" column to show "Log Analytics", "Insights and Analytics" (used by some of the legacy pricing tiers), and "Azure Monitor" (used by Capacity Reservation pricing tiers), and then adding a filter on the "Instance ID" column of "contains workspace" or "contains cluster" (the latter to include Log Analytics Cluster usage). The usage is shown in the "Consumed Quantity" column, and the unit for each entry is shown in the "Unit of Measure" column. More details are available to help you understand your Microsoft Azure bill.

Changing pricing tier

To change the Log Analytics pricing tier of your workspace:

  1. In the Azure portal, open Usage and estimated costs from your workspace, where you'll see a list of each of the pricing tiers available to this workspace.

  2. Review the estimated costs for each of the pricing tiers. This estimate is based on the last 31 days of usage, so it relies on the last 31 days being representative of your typical usage. In the example below you can see how, based on the data patterns from the last 31 days, this workspace would cost less in the Pay-As-You-Go tier (#1) compared to the 100 GB/day Capacity Reservation tier (#2).

    Pricing tiers

  3. After reviewing the estimated costs based on the last 31 days of usage, if you decide to change the pricing tier, click Select.

You can also set the pricing tier via Azure Resource Manager using the sku parameter (pricingTier in the Azure Resource Manager template).

Legacy pricing tiers

Subscriptions that contained a Log Analytics workspace or Application Insights resource before April 2, 2018, or that are linked to an Enterprise Agreement that started prior to February 1, 2019, will continue to have access to the legacy pricing tiers: Free, Standalone (Per GB), and Per Node (OMS). Workspaces in the Free pricing tier will have daily data ingestion limited to 500 MB (except for security data types collected by Azure Security Center), and data retention is limited to 7 days. The Free pricing tier is intended only for evaluation purposes. Workspaces in the Standalone or Per Node pricing tiers have user-configurable retention from 30 to 730 days.

Usage on the Standalone pricing tier is billed by the ingested data volume. It is reported in the Log Analytics service and the meter is named "Data Analyzed".

The Per Node pricing tier charges per monitored VM (node) with hourly granularity. For each monitored node, the workspace is allocated 500 MB of data per day that is not billed. This allocation is calculated with hourly granularity and is aggregated at the workspace level each day. Data ingested above the aggregate daily data allocation is billed per GB as data overage. Note that on your bill, the service will be Insight and Analytics for Log Analytics usage if the workspace is in the Per Node pricing tier. Usage is reported on three meters:

  1. Node: usage for the number of monitored nodes (VMs), in units of node*months.
  2. Data Overage per Node: the number of GB of data ingested in excess of the aggregated data allocation.
  3. Data Included per Node: the amount of ingested data covered by the aggregated data allocation. This meter is also used in all pricing tiers to show the amount of data covered by the Azure Security Center allocation.
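
As a worked example of this allocation: 10 nodes monitored for a full day accrue 10 × 500 MB = 5 GB of included data at the workspace level; if those nodes ingest 7 GB that day, the remaining 2 GB is billed on the Data Overage per Node meter.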

Tip

If your workspace has access to the Per Node pricing tier, but you're wondering whether it would cost less in a Pay-As-You-Go tier, you can use the query below to easily get a recommendation.

Workspaces created prior to April 2016 can also access the original Standard and Premium pricing tiers, which have fixed data retention of 30 and 365 days respectively. New workspaces cannot be created in the Standard or Premium pricing tiers, and if a workspace is moved out of these tiers, it cannot be moved back. Data ingestion meters for these legacy tiers are called "Data analyzed".

There are also some interactions between the use of legacy Log Analytics tiers and how usage is billed for Azure Security Center:

  1. If the workspace is in the legacy Standard or Premium tier, Azure Security Center will be billed only for Log Analytics data ingestion, not per node.
  2. If the workspace is in the legacy Per Node tier, Azure Security Center will be billed using the current Azure Security Center node-based pricing model.
  3. In other pricing tiers (including Capacity Reservations), if Azure Security Center was enabled before June 19, 2017, Azure Security Center will be billed only for Log Analytics data ingestion. Otherwise Azure Security Center will be billed using the current Azure Security Center node-based pricing model.

More details of pricing tier limitations are available at Azure subscription and service limits, quotas, and constraints.

None of the legacy pricing tiers has region-based pricing.

Note

To use the entitlements that come from purchasing OMS E1 Suite, OMS E2 Suite, or OMS Add-On for System Center, choose the Log Analytics Per Node pricing tier.

Log Analytics and Security Center

Azure Security Center billing is closely tied to Log Analytics billing. Security Center provides a 500 MB/node/day allocation against a set of security data types (WindowsEvent, SecurityAlert, SecurityBaseline, SecurityBaselineSummary, SecurityDetection, SecurityEvent, WindowsFirewall, MaliciousIPCommunication, LinuxAuditLog, SysmonEvent, ProtectionStatus) and the Update and UpdateSummary data types when the Update Management solution is not running on the workspace or solution targeting is enabled. If the workspace is in the legacy Per Node pricing tier, the Security Center and Log Analytics allocations are combined and applied jointly to all billable ingested data.
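
To see how much of your billable volume comes from security data types covered by this allocation, a sketch along the following lines can help (isfuzzy=true lets the union tolerate tables that don't exist in your workspace; extend the table list with the remaining types as needed):

union isfuzzy=true SecurityEvent, SecurityAlert, SecurityBaseline, SecurityBaselineSummary, SecurityDetection, WindowsFirewall, ProtectionStatus
| where TimeGenerated > startofday(ago(31d))
| where _IsBillable == true
| summarize SecurityDataGB = sum(_BilledSize) / 1.0E9 by bin(TimeGenerated, 1d)
| render columnchart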

Change the data retention period

The following steps describe how to configure how long log data is kept in your workspace. Data retention at the workspace level can be configured from 30 to 730 days (2 years) for all workspaces, unless they are using the legacy Free pricing tier. Learn more about pricing for longer data retention. Retention for individual data types can be set as low as 4 days.

Workspace level default retention

To set the default retention for your workspace:

  1. In the Azure portal, from your workspace, select Usage and estimated costs from the left pane.

  2. On the Usage and estimated costs page, click Data Retention at the top of the page.

  3. On the pane, move the slider to increase or decrease the number of days, and then click OK. If you are on the Free tier, you will not be able to modify the data retention period; you need to upgrade to a paid tier in order to control this setting.

    Change workspace data retention setting

When the retention is lowered, there is a several-day grace period before the data older than the new retention setting is removed.

The retention can also be set via Azure Resource Manager using the retentionInDays parameter. When you set the data retention to 30 days, you can trigger an immediate purge of older data using the immediatePurgeDataOn30Days parameter (eliminating the several-day grace period). This may be useful for compliance-related scenarios where immediate data removal is imperative. This immediate purge functionality is only exposed via Azure Resource Manager.

Workspaces with 30 days retention may actually retain data for 31 days. If it is imperative that data be kept for only 30 days, use Azure Resource Manager to set the retention to 30 days along with the immediatePurgeDataOn30Days parameter.

Two data types, Usage and AzureActivity, are retained for a minimum of 90 days by default, and there is no charge for this 90-day retention. If the workspace retention is increased above 90 days, the retention of these data types is increased as well. These data types are also free from data ingestion charges.

Data types from workspace-based Application Insights resources (AppAvailabilityResults, AppBrowserTimings, AppDependencies, AppExceptions, AppEvents, AppMetrics, AppPageViews, AppPerformanceCounters, AppRequests, AppSystemEvents, and AppTraces) are also retained for 90 days by default, and there is no charge for this 90-day retention. Their retention can be adjusted using the retention by data type functionality.

Note that the Log Analytics purge API does not affect retention billing and is intended to be used for very limited cases. To reduce your retention bill, the retention period must be reduced, either for the workspace or for specific data types.

Retention by data type

It is also possible to specify different retention settings for individual data types, from 4 to 730 days (except for workspaces in the legacy Free pricing tier), overriding the workspace-level default retention. Each data type is a sub-resource of the workspace. For instance, the SecurityEvent table can be addressed in Azure Resource Manager as:

/subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent

Note that the data type (table) is case sensitive. To get the current per-data-type retention settings of a particular data type (in this example SecurityEvent), use:

    GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent?api-version=2017-04-26-preview

Note

Retention is only returned for a data type if the retention has been explicitly set for it. Data types which have not had retention explicitly set (and thus inherit the workspace retention) will not return anything from this call.

To get the current per-data-type retention settings for all data types in your workspace that have had their per-data-type retention set, just omit the specific data type, for example:

    GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables?api-version=2017-04-26-preview

To set the retention of a particular data type (in this example SecurityEvent) to 730 days, do:

    PUT /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent?api-version=2017-04-26-preview
    {
        "properties": 
        {
            "retentionInDays": 730
        }
    }

Valid values for retentionInDays are from 30 through 730.

UsageAzureActivity 数据类型不能设置自定义保留期。The Usage and AzureActivity data types cannot be set with custom retention. 它们将采用默认工作区保留期的最大值或 90 天。They will take on the maximum of the default workspace retention or 90 days.

A great tool to connect directly to Azure Resource Manager to set retention by data type is the OSS tool ARMclient. Learn more about ARMclient from articles by David Ebbo and Daniel Bowbyes. Here's an example using ARMClient, setting SecurityEvent data to a 730-day retention:

armclient PUT /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent?api-version=2017-04-26-preview "{properties: {retentionInDays: 730}}"

Tip

Setting retention on individual data types can be used to reduce your costs for data retention. For data collected starting in October 2019 (when this feature was released), reducing the retention for some data types can reduce your retention cost over time. For data collected earlier, setting a lower retention for an individual type will not affect your retention costs.

Manage your maximum daily data volume

You can configure a daily cap and limit the daily ingestion for your workspace, but use care, as your goal should not be to hit the daily limit. Otherwise, you lose data for the remainder of the day, which can impact other Azure services and solutions whose functionality may depend on up-to-date data being available in the workspace, as well as your ability to observe and receive alerts when the health of resources supporting IT services is impacted. The daily cap is intended to be used as a way to manage an unexpected increase in data volume from your managed resources and stay within your limit, or to limit unplanned charges for your workspace. It is not appropriate to set a daily cap so that it is met each day on a workspace.

Each workspace has its daily cap applied on a different hour of the day. The reset hour is shown in the Daily Cap page (see below). This reset hour cannot be configured.

Soon after the daily limit is reached, the collection of billable data types stops for the rest of the day. Latency inherent in applying the daily cap means that the cap is not applied at precisely the specified daily cap level. A warning banner appears across the top of the page for the selected Log Analytics workspace, and an operation event is sent to the Operation table under the LogManagement category. Data collection resumes after the reset time defined under Daily limit will be set at. We recommend defining an alert rule based on this operation event, configured to notify when the daily data limit has been reached (see below).
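
To inspect these operation events directly before wiring up an alert, here is a minimal sketch using the _LogOperation function described below (the exact Operation and Detail text may vary):

_LogOperation
| where TimeGenerated > ago(7d)
| where Category == "Ingestion"
| project TimeGenerated, Operation, Level, Detail
| sort by TimeGenerated desc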

Note

The daily cap cannot stop data collection at precisely the specified cap level, and some excess data is expected, particularly if the workspace is receiving high volumes of data. See below for a query that is helpful in studying the daily cap behavior.

Warning

The daily cap does not stop the collection of data types that are included in the Azure Security Center daily per-node allowance (WindowsEvent, SecurityAlert, SecurityBaseline, SecurityBaselineSummary, SecurityDetection, SecurityEvent, WindowsFirewall, MaliciousIPCommunication, LinuxAuditLog, SysmonEvent, ProtectionStatus, Update, and UpdateSummary), except for workspaces in which Azure Security Center was installed before June 19, 2017.

Identify what daily data limit to define

Review Log Analytics Usage and estimated costs to understand the data ingestion trend and what daily volume cap to define. It should be considered with care, since you won't be able to monitor your resources after the limit is reached.

Set the Daily Cap

The following steps describe how to configure a limit to manage the volume of data that a Log Analytics workspace will ingest per day.

  1. From your workspace, select Usage and estimated costs from the left pane.

  2. On the Usage and estimated costs page for the selected workspace, click Data Cap at the top of the page.

  3. Daily cap is OFF by default; click ON to enable it, and then set the data volume limit in GB/day.

    Log Analytics configure data limit

The daily cap can be configured via ARM by setting the dailyQuotaGb parameter under WorkspaceCapping, as described at Workspaces - Create Or Update.

View the effect of the Daily Cap

To view the effect of the daily cap, it's important to account for the security data types not included in the daily cap, and the reset hour for your workspace. The daily cap reset hour is visible on the Daily Cap page. The following query can be used to track the data volumes subject to the daily cap between daily cap resets. In this example, the workspace's reset hour is 14:00. You'll need to update this for your workspace.

let DailyCapResetHour=14;
Usage
// The Usage table reports the table name in DataType and the volume in MB in Quantity
| where DataType !in ("SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "WindowsEvent")
// Shift timestamps so that each bucketed "day" starts at the daily cap reset hour
| extend TimeGenerated=datetime_add("hour",-1*DailyCapResetHour,TimeGenerated)
| where TimeGenerated > startofday(ago(31d))
| where IsBillable
| summarize IngestedGbBetweenDailyCapResets=sum(Quantity)/1000. by day=bin(TimeGenerated, 1d)
| render areachart

Alert when Daily Cap reached

While we present a visual cue in the Azure portal when your data limit threshold is met, this behavior doesn't necessarily align to how you manage operational issues requiring immediate attention. To receive an alert notification, you can create a new alert rule in Azure Monitor. To learn more, see how to create, view, and manage alerts.

To get you started, here are the recommended settings for the alert querying the Operation table using the _LogOperation function:

  • Target: Select your Log Analytics resource
  • Criteria:
    • Signal name: Custom log search
    • Search query: _LogOperation | where Category == "Ingestion" | where Operation == "Ingestion rate" | where Level == "Warning"
    • Based on: Number of results
    • Condition: Greater than
    • Threshold: 0
    • Period: 5 (minutes)
    • Frequency: 5 (minutes)
  • Alert rule name: Daily data limit reached
  • Severity: Warning (Sev 1)

Once the alert is defined and the limit is reached, an alert is triggered and performs the response defined in the action group. It can notify your team via email and text messages, or automate actions using webhooks, Automation runbooks, or integration with an external ITSM solution.

Troubleshooting why usage is higher than expected

Higher usage is caused by one, or both, of:

  • More nodes than expected sending data to the Log Analytics workspace
  • More data than expected being sent to the Log Analytics workspace (perhaps due to starting to use a new solution or a configuration change to an existing solution)

Understanding nodes sending data

To understand the number of nodes reporting heartbeats from the agent each day in the last month, use:

Heartbeat 
| where TimeGenerated > startofday(ago(31d))
| summarize nodes = dcount(Computer) by bin(TimeGenerated, 1d)    
| render timechart

To get a count of nodes sending data in the last 24 hours, use the query:

find where TimeGenerated > ago(24h) project Computer
| extend computerName = tolower(tostring(split(Computer, '.')[0]))
| where computerName != ""
| summarize nodes = dcount(computerName)

To get a list of nodes sending any data (and the amount of data sent by each), the following query can be used:

find where TimeGenerated > ago(24h) project _BilledSize, Computer
| extend computerName = tolower(tostring(split(Computer, '.')[0]))
| where computerName != ""
| summarize TotalVolumeBytes=sum(_BilledSize) by computerName

Nodes billed by the legacy Per Node pricing tier

The legacy Per Node pricing tier bills for nodes with hourly granularity, and doesn't count nodes that are only sending a set of security data types. Its daily count of nodes would be close to the following query:

find where TimeGenerated >= startofday(ago(7d)) and TimeGenerated < startofday(now()) project Computer, _IsBillable, Type, TimeGenerated
| where Type !in ("SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "WindowsEvent")
| extend computerName = tolower(tostring(split(Computer, '.')[0]))
| where computerName != ""
| where _IsBillable == true
| summarize billableNodesPerHour=dcount(computerName) by bin(TimeGenerated, 1h)
| summarize billableNodesPerDay = sum(billableNodesPerHour)/24., billableNodeMonthsPerDay = sum(billableNodesPerHour)/24./31.  by day=bin(TimeGenerated, 1d)
| sort by day asc

The number of units on your bill is in units of node*months, which is represented by billableNodeMonthsPerDay in the query. If the workspace has the Update Management solution installed, add the Update and UpdateSummary data types to the list in the where clause in the above query. Finally, there is some additional complexity in the actual billing algorithm when solution targeting is used that is not represented in the above query.

Tip

Use these find queries sparingly, as scans across data types are resource intensive to execute. If you do not need results per computer, then query on the Usage data type (see below).

Understanding ingested data volume

On the Usage and Estimated Costs page, the Data ingestion per solution chart shows the total volume of data sent and how much is being sent by each solution. This allows you to determine trends, such as whether the overall data usage (or usage by a particular solution) is growing, remaining steady, or decreasing.

Data volume for specific events

To look at the size of ingested data for a particular set of events, you can query the specific table (in this example Event) and then restrict the query to the events of interest (in this example event ID 5145 or 5156):

Event
| where TimeGenerated > startofday(ago(31d)) and TimeGenerated < startofday(now()) 
| where EventID == 5145 or EventID == 5156
| where _IsBillable == true
| summarize count(), Bytes=sum(_BilledSize) by EventID, bin(TimeGenerated, 1d)

Note that the clause where _IsBillable == true filters out data types from certain solutions for which there is no ingestion charge. Learn more about _IsBillable.

Data volume by solution

The query used to view the billable data volume by solution over the last month (excluding the last partial day) is:

Usage 
| where TimeGenerated > ago(32d)
| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
| where IsBillable == true
| summarize BillableDataGB = sum(Quantity) / 1000. by bin(StartTime, 1d), Solution 
| render columnchart

The clause with TimeGenerated is only to ensure that the query experience in the Azure portal will look back beyond the default 24 hours. When using the Usage data type, StartTime and EndTime represent the time buckets for which results are presented.

Data volume by type

You can drill in further to see data trends by data type:

Usage 
| where TimeGenerated > ago(32d)
| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
| where IsBillable == true
| summarize BillableDataGB = sum(Quantity) / 1000. by bin(StartTime, 1d), DataType 
| render columnchart

Or to see a table by solution and type for the last month:

Usage 
| where TimeGenerated > ago(32d)
| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
| where IsBillable == true
| summarize BillableDataGB = sum(Quantity) by Solution, DataType
| sort by Solution asc, DataType asc

Data volume by computer

The Usage data type does not include information at the computer level. To see the size of ingested data per computer, use the _BilledSize property, which provides the size in bytes:

find where TimeGenerated > ago(24h) project _BilledSize, _IsBillable, Computer
| where _IsBillable == true 
| extend computerName = tolower(tostring(split(Computer, '.')[0]))
| summarize BillableDataBytes = sum(_BilledSize) by  computerName 
| sort by BillableDataBytes nulls last

The _IsBillable property specifies whether the ingested data will incur charges.

To see the count of billable events ingested per computer, use:

find where TimeGenerated > ago(24h) project _IsBillable, Computer
| where _IsBillable == true 
| extend computerName = tolower(tostring(split(Computer, '.')[0]))
| summarize eventCount = count() by computerName  
| sort by eventCount nulls last

Tip

Use these find queries sparingly, as scans across data types are resource intensive to execute. If you do not need results per computer, then query on the Usage data type.

Data volume by Azure resource, resource group, or subscription

For data from nodes hosted in Azure, you can get the size of ingested data per Azure resource by using the _ResourceId property, which provides the full path to the resource:

find where TimeGenerated > ago(24h) project _ResourceId, _BilledSize, _IsBillable
| where _IsBillable == true 
| summarize BillableDataBytes = sum(_BilledSize) by _ResourceId | sort by BillableDataBytes nulls last

For data from nodes hosted in Azure, you can get the size of ingested data per Azure subscription by using the _SubscriptionId property:

find where TimeGenerated > ago(24h) project _ResourceId, _BilledSize, _IsBillable
| where _IsBillable == true 
| summarize BillableDataBytes = sum(_BilledSize) by _ResourceId
| summarize BillableDataBytes = sum(BillableDataBytes) by _SubscriptionId | sort by BillableDataBytes nulls last

To get data volume by resource group, you can parse _ResourceId:

find where TimeGenerated > ago(24h) project _ResourceId, _BilledSize, _IsBillable
| where _IsBillable == true 
| summarize BillableDataBytes = sum(_BilledSize) by _ResourceId
| extend resourceGroup = tostring(split(_ResourceId, "/")[4] )
| summarize BillableDataBytes = sum(BillableDataBytes) by resourceGroup | sort by BillableDataBytes nulls last

You can also parse the _ResourceId more fully, if needed, using:

| parse tolower(_ResourceId) with "/subscriptions/" subscriptionId "/resourcegroups/" 
    resourceGroup "/providers/" provider "/" resourceType "/" resourceName   

Tip

Use these find queries sparingly, as scans across data types are resource intensive to execute. If you do not need results per subscription, resource group, or resource name, then query on the Usage data type.

Warning

Some of the fields of the Usage data type, while still in the schema, have been deprecated, and their values are no longer populated. These are Computer, as well as fields related to ingestion (TotalBatches, BatchesWithinSla, BatchesOutsideSla, BatchesCapped, and AverageProcessingTimeMs).

Querying for common data types

To dig deeper into the source of data for a particular data type, here are some useful example queries:

  • Workspace-based Application Insights resources
  • Security solution
    • SecurityEvent | summarize AggregatedValue = count() by EventID
  • Log Management solution
    • Usage | where Solution == "LogManagement" and iff(isnotnull(toint(IsBillable)), IsBillable == true, IsBillable == "true") == true | summarize AggregatedValue = count() by DataType
  • Perf data type
    • Perf | summarize AggregatedValue = count() by CounterPath
    • Perf | summarize AggregatedValue = count() by CounterName
  • Event data type
    • Event | summarize AggregatedValue = count() by EventID
    • Event | summarize AggregatedValue = count() by EventLog, EventLevelName
  • Syslog data type
    • Syslog | summarize AggregatedValue = count() by Facility, SeverityLevel
    • Syslog | summarize AggregatedValue = count() by ProcessName
  • AzureDiagnostics data type
    • AzureDiagnostics | summarize AggregatedValue = count() by ResourceProvider, ResourceId

Tips for reducing data volume

Some suggestions for reducing the volume of logs collected include:

  • Container Insights: Configure Container Insights to collect only the data you require.
  • Security events: Select common or minimal security events, or change the security audit policy to collect only needed events. In particular, review the need to collect events for:
    - audit filtering platform
    - audit registry
    - audit file system
    - audit kernel object
    - audit handle manipulation
    - audit removable storage
  • Performance counters: Change the performance counter configuration to:
    - Reduce the frequency of collection
    - Reduce the number of performance counters
  • Event logs: Change the event log configuration to:
    - Reduce the number of event logs collected
    - Collect only required event levels. For example, do not collect Information level events.
  • Syslog: Change the syslog configuration to:
    - Reduce the number of facilities collected
    - Collect only required event levels. For example, do not collect Info and Debug level events.
  • AzureDiagnostics: Change resource log collection to:
    - Reduce the number of resources sending logs to Log Analytics
    - Collect only required logs
  • Solution data from computers that don't need the solution: Use solution targeting to collect data from only required groups of computers.

Getting nodes as billed in the Per Node pricing tier

To get a list of computers that will be billed as nodes if the workspace is in the legacy Per Node pricing tier, look for nodes which are sending billed data types (some data types are free). To do this, use the _IsBillable property and the leftmost field of the fully qualified domain name. This returns the count of computers with billed data per hour (which is the granularity at which nodes are counted and billed):

find where TimeGenerated > ago(24h) project Computer, TimeGenerated
| extend computerName = tolower(tostring(split(Computer, '.')[0]))
| where computerName != ""
| summarize billableNodes=dcount(computerName) by bin(TimeGenerated, 1h) | sort by TimeGenerated asc

Getting Security and Automation node counts

If you are on the "Per node (OMS)" pricing tier, you are charged based on the number of nodes and solutions you use. The number of Insights and Analytics nodes for which you are being billed is shown in a table on the Usage and Estimated Cost page.

To see the number of distinct Security nodes, you can use the query:

union
(
    Heartbeat
    | where (Solutions has 'security' or Solutions has 'antimalware' or Solutions has 'securitycenter')
    | project Computer
),
(
    ProtectionStatus
    | where Computer !in~
    (
        (
            Heartbeat
            | project Computer
        )
    )
    | project Computer
)
| distinct Computer
| project lowComputer = tolower(Computer)
| distinct lowComputer
| count

To see the number of distinct Automation nodes, use the query:

 ConfigurationData 
 | where (ConfigDataType == "WindowsServices" or ConfigDataType == "Software" or ConfigDataType =="Daemons") 
 | extend lowComputer = tolower(Computer) | summarize by lowComputer 
 | join (
     Heartbeat 
       | where SCAgentChannel == "Direct"
       | extend lowComputer = tolower(Computer) | summarize by lowComputer, ComputerEnvironment
 ) on lowComputer
 | summarize count() by ComputerEnvironment | sort by ComputerEnvironment asc

Evaluating the legacy Per Node pricing tier

The decision of whether workspaces with access to the legacy Per Node pricing tier are better off in that tier or in a current Pay-As-You-Go or Capacity Reservation tier is often difficult for customers to assess. This involves understanding the trade-off between the fixed cost per monitored node in the Per Node pricing tier, with its included data allocation of 500 MB/node/day, and the cost of just paying for ingested data in the Pay-As-You-Go (Per GB) tier.
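
As rough arithmetic using the example prices in the query below ($15 per node per month and $2.30/GB): a monitored node costs about $15/31 ≈ $0.48 per day and carries an allocation worth up to 0.5 GB × $2.30 ≈ $1.15 per day, so the Per Node tier tends to win only when nodes actually consume most of their 500 MB/day allocation.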

To facilitate this assessment, the following query can be used to make a recommendation for the optimal pricing tier based on a workspace's usage patterns. This query looks at the monitored nodes and data ingested into a workspace in the last 7 days, and for each day evaluates which pricing tier would have been optimal. To use the query, you need to:

  1. specify whether the workspace is using Azure Security Center by setting workspaceHasSecurityCenter to true or false,
  2. update the prices if you have specific discounts, and
  3. specify the number of days to look back and analyze by setting daysToEvaluate. This is useful if the query is taking too long trying to look at 7 days of data.

Here is the pricing tier recommendation query:

// Set these parameters before running query
let workspaceHasSecurityCenter = true;  // Specify if the workspace has Azure Security Center
let PerNodePrice = 15.; // Enter your monthly price per monitored node
let PerNodeOveragePrice = 2.30; // Enter your price per GB for data overage in the Per Node pricing tier
let PerGBPrice = 2.30; // Enter your price per GB in the Pay-as-you-go pricing tier
let daysToEvaluate = 7; // Enter the number of previous days to look at (reduce if the query is taking too long)
// ---------------------------------------
let SecurityDataTypes=dynamic(["SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "WindowsEvent", "Update", "UpdateSummary"]);
let StartDate = startofday(datetime_add("Day",-1*daysToEvaluate,now()));
let EndDate = startofday(now());
union * 
| where TimeGenerated >= StartDate and TimeGenerated < EndDate
| extend computerName = tolower(tostring(split(Computer, '.')[0]))
| where computerName != ""
| summarize nodesPerHour = dcount(computerName) by bin(TimeGenerated, 1h)  
| summarize nodesPerDay = sum(nodesPerHour)/24.  by day=bin(TimeGenerated, 1d)  
| join kind=leftouter (
    Heartbeat 
    | where TimeGenerated >= StartDate and TimeGenerated < EndDate
    | where Computer != ""
    | summarize ASCnodesPerHour = dcount(Computer) by bin(TimeGenerated, 1h) 
    | extend ASCnodesPerHour = iff(workspaceHasSecurityCenter, ASCnodesPerHour, 0)
    | summarize ASCnodesPerDay = sum(ASCnodesPerHour)/24.  by day=bin(TimeGenerated, 1d)   
) on day
| join (
    Usage 
    | where TimeGenerated >= StartDate and TimeGenerated < EndDate
    | where IsBillable == true
    | extend NonSecurityData = iff(DataType !in (SecurityDataTypes), Quantity, 0.)
    | extend SecurityData = iff(DataType in (SecurityDataTypes), Quantity, 0.)
    | summarize DataGB=sum(Quantity)/1000., NonSecurityDataGB=sum(NonSecurityData)/1000., SecurityDataGB=sum(SecurityData)/1000. by day=bin(StartTime, 1d)  
) on day
| extend AvgGbPerNode =  NonSecurityDataGB / nodesPerDay
| extend PerGBDailyCost = iff(workspaceHasSecurityCenter,
             (NonSecurityDataGB + max_of(SecurityDataGB - 0.5*ASCnodesPerDay, 0.)) * PerGBPrice,
             DataGB * PerGBPrice)
| extend OverageGB = iff(workspaceHasSecurityCenter, 
             max_of(DataGB - 0.5*nodesPerDay - 0.5*ASCnodesPerDay, 0.), 
             max_of(DataGB - 0.5*nodesPerDay, 0.))
| extend PerNodeDailyCost = nodesPerDay * PerNodePrice / 31. + OverageGB * PerNodeOveragePrice
| extend Recommendation = iff(PerNodeDailyCost < PerGBDailyCost, "Per Node tier", 
             iff(NonSecurityDataGB > 85., "Capacity Reservation tier", "Pay-as-you-go (Per GB) tier"))
| project day, nodesPerDay, ASCnodesPerDay, NonSecurityDataGB, SecurityDataGB, OverageGB, AvgGbPerNode, PerGBDailyCost, PerNodeDailyCost, Recommendation | sort by day asc
//| project day, Recommendation // Comment this line to see details
| sort by day asc

This query is not an exact replication of how usage is calculated, but it will work for providing pricing tier recommendations in most cases.

Note

To use the entitlements that come from purchasing OMS E1 Suite, OMS E2 Suite, or OMS Add-On for System Center, choose the Log Analytics Per Node pricing tier.

Create an alert when data collection is high

This section describes how to create an alert when the data volume in the last 24 hours exceeds a specified amount, using Azure Monitor Log Alerts.

To alert if the billable data volume ingested in the last 24 hours was greater than 50 GB, follow these steps:

  • Define alert condition: specify your Log Analytics workspace as the resource target.
  • Alert criteria: specify the following:
    • Signal Name: select Custom log search
    • Search query: Usage | where IsBillable | summarize DataGB = sum(Quantity / 1000.) | where DataGB > 50. If you want a different threshold, adjust the 50 GB value in the query.
    • Alert logic: Based on number of results, with Condition Greater than a Threshold of 0
    • Time period: 1440 minutes, and Alert frequency: every 1440 minutes, to run once a day
  • Define alert details: specify the following:
    • Name: Billable data volume greater than 50 GB in 24 hours
    • Severity: Warning

Specify an existing action group, or create a new one, so that when the log alert matches criteria, you are notified.

When you receive an alert, use the steps in the sections above to troubleshoot why usage is higher than expected.

Data transfer charges using Log Analytics

Sending data to Log Analytics might incur data bandwidth charges; however, this is limited to virtual machines where a Log Analytics agent is installed, and doesn't apply when using diagnostic settings or other connectors that are built into Azure Sentinel. As described in the Azure Bandwidth pricing page, data transfer between Azure services located in two regions is charged as outbound data transfer at the normal rate. Inbound data transfer is free. However, this charge is very small (a few percent) compared to the costs for Log Analytics data ingestion. Consequently, controlling costs for Log Analytics needs to focus on your ingested data volume.

Troubleshooting why Log Analytics is no longer collecting data

If you are on the legacy Free pricing tier and have sent more than 500 MB of data in a day, data collection stops for the rest of the day. Reaching the daily limit is a common reason that Log Analytics stops collecting data, or data appears to be missing. Log Analytics creates an event of type Operation when data collection starts and stops. Run the following query in search to check whether you are reaching the daily limit and missing data:

Operation | where OperationCategory == 'Data Collection Status'
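
For a quick view of the most recent start and stop events, here is a minimal sketch that narrows the same query:

Operation
| where TimeGenerated > ago(7d)
| where OperationCategory == 'Data Collection Status'
| project TimeGenerated, OperationStatus, Detail
| sort by TimeGenerated desc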

When data collection stops, the OperationStatus is Warning. When data collection starts, the OperationStatus is Succeeded. The following list describes reasons that data collection stops and suggested actions to resume data collection:

  • Daily cap of your workspace was reached: Wait for collection to automatically restart, or increase the daily data volume limit as described in Manage your maximum daily data volume. The daily cap reset time is shown on the Daily Cap page.
  • Your workspace has hit the Data Ingestion Volume Rate: The default ingestion volume rate limit for data sent from Azure resources using diagnostic settings is approximately 6 GB/min per workspace. This is an approximate value, since the actual size can vary between data types depending on the log length and its compression ratio. This limit does not apply to data that is sent from agents or the Data Collector API. If you send data at a higher rate to a single workspace, some data is dropped, and an event is sent to the Operation table in your workspace every 6 hours while the threshold continues to be exceeded. If your ingestion volume continues to exceed the rate limit, or you are expecting to reach it sometime soon, you can request an increase for your workspace by sending an email to LAIngestionRate@microsoft.com or opening a support request. The event indicating the data ingestion rate limit can be found with the query Operation | where OperationCategory == "Ingestion" | where Detail startswith "The rate of data crossed the threshold".
  • Daily limit of legacy Free pricing tier reached: Wait until the following day for collection to automatically restart, or change to a paid pricing tier.
  • Azure subscription is in a suspended state (the free trial ended, an Azure pass expired, or the monthly spending limit was reached, for example on an MSDN or Visual Studio subscription): Convert to a paid subscription, remove the limit, or wait until the limit resets.

To be notified when data collection stops, use the steps described in Create daily data cap alert. Use the steps described in Create an action group to configure an e-mail, webhook, or runbook action for the alert rule.

Limits summary

There are some additional Log Analytics limits, some of which depend on the Log Analytics pricing tier. These are documented at Azure subscription and service limits, quotas, and constraints.

Next steps