
Best Practices for performance improvements using Service Bus Messaging

This article describes how to use Azure Service Bus to optimize performance when exchanging brokered messages. The first part of this article describes the different mechanisms that are offered to help increase performance. The second part provides guidance on how to use Service Bus in a way that offers the best performance in a given scenario.

Throughout this article, the term "client" refers to any entity that accesses Service Bus. A client can take the role of a sender or a receiver. The term "sender" refers to a Service Bus queue or topic client that sends messages to a Service Bus queue or topic subscription. The term "receiver" refers to a Service Bus queue or subscription client that receives messages from a Service Bus queue or subscription.

These sections introduce several concepts that Service Bus uses to help boost performance.

Protocols

Service Bus enables clients to send and receive messages via one of three protocols:

  1. Advanced Message Queuing Protocol (AMQP)
  2. Service Bus Messaging Protocol (SBMP)
  3. HTTP

AMQP and SBMP are more efficient, because they maintain the connection to Service Bus as long as the messaging factory exists. They also implement batching and prefetching. Unless explicitly mentioned otherwise, all content in this article assumes the use of AMQP or SBMP.
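When using the Microsoft.ServiceBus.Messaging library, the transport protocol can be selected on the factory settings. The following is a minimal sketch, assuming that tokenProvider and namespaceUri have already been created as in the later examples:

// Choose AMQP as the transport for all clients created by this factory.
MessagingFactorySettings settings = new MessagingFactorySettings();
settings.TokenProvider = tokenProvider;
settings.TransportType = TransportType.Amqp; // TransportType.NetMessaging selects SBMP
MessagingFactory factory = MessagingFactory.Create(namespaceUri, settings);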

Reusing factories and clients

Service Bus client objects, such as QueueClient or MessageSender, are created through a MessagingFactory object, which also provides internal management of connections. It is recommended that you do not close messaging factories or queue, topic, and subscription clients after you send a message, and then re-create them when you send the next message. Closing a messaging factory deletes the connection to the Service Bus service, and a new connection is established when the factory is recreated. Establishing a connection is an expensive operation that you can avoid by reusing the same factory and client objects for multiple operations. You can safely use these client objects for concurrent asynchronous operations and from multiple threads.
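The following sketch illustrates the pattern; the connection string and the queue path "myQueue" are placeholders:

// Create the factory and client once, then reuse them for many operations.
MessagingFactory factory = MessagingFactory.CreateFromConnectionString(connectionString);
QueueClient queueClient = factory.CreateQueueClient("myQueue");

for (int i = 0; i < 100; i++)
{
    await queueClient.SendAsync(new BrokeredMessage("message " + i));
}

// Close the client and factory only when the application no longer needs them.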

Concurrent operations

Performing an operation (send, receive, delete, etc.) takes some time. This time includes the processing of the operation by the Service Bus service in addition to the latency of the request and the reply. To increase the number of operations per unit of time, operations must execute concurrently.

The client schedules concurrent operations by performing asynchronous operations. The next request is started before the previous request is completed. The following code snippet is an example of an asynchronous send operation:

 BrokeredMessage m1 = new BrokeredMessage(body);
 BrokeredMessage m2 = new BrokeredMessage(body);
 
 Task send1 = queueClient.SendAsync(m1).ContinueWith((t) => 
   {
     Console.WriteLine("Sent message #1");
   });
 Task send2 = queueClient.SendAsync(m2).ContinueWith((t) => 
   {
     Console.WriteLine("Sent message #2");
   });
 Task.WaitAll(send1, send2);
 Console.WriteLine("All messages sent");

The following code is an example of an asynchronous receive operation. See the full program here:

var receiver = new MessageReceiver(connectionString, queueName, ReceiveMode.PeekLock);
var doneReceiving = new TaskCompletionSource<bool>();

// Register a handler that processes each message and completes it explicitly.
receiver.RegisterMessageHandler(
    async (message, token) => await receiver.CompleteAsync(message.SystemProperties.LockToken),
    new MessageHandlerOptions(args => Task.CompletedTask) { AutoComplete = false });

Receive mode

When creating a queue or subscription client, you can specify a receive mode: Peek-lock or Receive and Delete. The default receive mode is PeekLock. When operating in this mode, the client sends a request to receive a message from Service Bus. After the client has received the message, it sends a request to complete the message.

When setting the receive mode to ReceiveAndDelete, both steps are combined in a single request. Combining the steps reduces the overall number of operations and can improve the overall message throughput. This performance gain comes at the risk of losing messages.

Service Bus does not support transactions for receive-and-delete operations. In addition, peek-lock semantics are required for any scenario in which the client wants to defer or dead-letter a message.
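The receive mode is specified when the client is created, as in the following sketch; connectionString and queueName are placeholders:

// Peek-lock (default): receive, process, then explicitly complete the message.
QueueClient peekLockClient = QueueClient.CreateFromConnectionString(connectionString, queueName, ReceiveMode.PeekLock);
BrokeredMessage peeked = peekLockClient.Receive();
peeked.Complete();

// Receive and Delete: the message is removed from the queue as soon as it is received.
QueueClient receiveDeleteClient = QueueClient.CreateFromConnectionString(connectionString, queueName, ReceiveMode.ReceiveAndDelete);
BrokeredMessage received = receiveDeleteClient.Receive();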

Client-side batching

Client-side batching enables a queue or topic client to delay the sending of a message for a certain period of time. If the client sends additional messages during this time period, it transmits the messages in a single batch. Client-side batching also causes a queue or subscription client to batch multiple Complete requests into a single request. Batching is only available for asynchronous Send and Complete operations. Synchronous operations are immediately sent to the Service Bus service. Batching does not occur for peek or receive operations, nor does batching occur across clients.

By default, a client uses a batch interval of 20 ms. You can change the batch interval by setting the BatchFlushInterval property before creating the messaging factory. This setting affects all clients that are created by this factory.

To disable batching, set the BatchFlushInterval property to TimeSpan.Zero. For example:

MessagingFactorySettings mfs = new MessagingFactorySettings();
mfs.TokenProvider = tokenProvider;
mfs.NetMessagingTransportSettings.BatchFlushInterval = TimeSpan.Zero;
MessagingFactory messagingFactory = MessagingFactory.Create(namespaceUri, mfs);

Batching does not affect the number of billable messaging operations, and is available only for the Service Bus client protocol using the Microsoft.ServiceBus.Messaging library. The HTTP protocol does not support batching.

Note

Setting BatchFlushInterval ensures that batching is implicit from the application's perspective; that is, the application makes SendAsync() and CompleteAsync() calls and does not make explicit batch calls.

Explicit client-side batching can be implemented by using the following method call:

Task SendBatchAsync (IEnumerable<BrokeredMessage> messages);

Here, the combined size of the messages must be less than the maximum message size supported by the pricing tier.
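A minimal usage sketch, assuming an existing queueClient and a batch that stays within that size limit:

// Send several messages in one explicit batch.
var batch = new List<BrokeredMessage>
{
    new BrokeredMessage("message 1"),
    new BrokeredMessage("message 2"),
    new BrokeredMessage("message 3")
};
await queueClient.SendBatchAsync(batch);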

Batching store access

To increase the throughput of a queue, topic, or subscription, Service Bus batches multiple messages when it writes to its internal store. If batching is enabled on a queue or topic, writing messages into the store is batched. If batching is enabled on a queue or subscription, deleting messages from the store is batched. If batched store access is enabled for an entity, Service Bus delays a store write operation for that entity by up to 20 ms.

Note

There is no risk of losing messages with batching, even if there is a Service Bus failure at the end of a 20-ms batching interval.

Additional store operations that occur during this interval are added to the batch. Batched store access only affects Send and Complete operations; receive operations are not affected. Batched store access is a property on an entity. Batching occurs across all entities that enable batched store access.

When creating a new queue, topic, or subscription, batched store access is enabled by default. To disable batched store access, set the EnableBatchedOperations property to false before creating the entity. For example:

QueueDescription qd = new QueueDescription("myQueue"); // "myQueue" is an example entity path
qd.EnableBatchedOperations = false;
QueueDescription q = namespaceManager.CreateQueue(qd);

Batched store access does not affect the number of billable messaging operations, and is a property of a queue, topic, or subscription. It is independent of the receive mode and of the protocol that is used between a client and the Service Bus service.

Prefetching

Prefetching enables the queue or subscription client to load additional messages from the service when it performs a receive operation. The client stores these messages in a local cache. The size of the cache is determined by the QueueClient.PrefetchCount or SubscriptionClient.PrefetchCount properties. Each client that enables prefetching maintains its own cache. A cache is not shared across clients. If the client initiates a receive operation and its cache is empty, the service transmits a batch of messages. The size of the batch equals the size of the cache or 256 KB, whichever is smaller. If the client initiates a receive operation and the cache contains a message, the message is taken from the cache.

When a message is prefetched, the service locks the prefetched message. With the lock, the prefetched message cannot be received by a different receiver. If the receiver cannot complete the message before the lock expires, the message becomes available to other receivers. The prefetched copy of the message remains in the cache. The receiver that consumes the expired cached copy receives an exception when it tries to complete that message. By default, the message lock expires after 60 seconds. This value can be extended to 5 minutes. To prevent the consumption of expired messages, the cache size should always be smaller than the number of messages that a client can consume within the lock time-out interval.

When using the default lock expiration of 60 seconds, a good value for PrefetchCount is 20 times the maximum processing rate of all receivers of the factory. For example, a factory creates three receivers, and each receiver can process up to 10 messages per second. The prefetch count should not exceed 20 x 3 x 10 = 600. By default, PrefetchCount is set to 0, which means that no additional messages are fetched from the service.
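Applied to that example, the prefetch count might be set as in the following sketch; queueClient is a placeholder for a receiver created from the factory in question:

// 20 (seconds of default lock duration) x 3 receivers x 10 messages/second = 600
queueClient.PrefetchCount = 600;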

Prefetching messages increases the overall throughput for a queue or subscription because it reduces the overall number of message operations, or round trips. Fetching the first message, however, takes longer (due to the increased message size). Receiving prefetched messages is faster because these messages have already been downloaded by the client.

The time-to-live (TTL) property of a message is checked by the server at the time the server sends the message to the client. The client does not check the message's TTL property when the message is received. Instead, the message can be received even if the message's TTL has passed while the message was cached by the client.

Prefetching does not affect the number of billable messaging operations, and is available only for the Service Bus client protocol. The HTTP protocol does not support prefetching. Prefetching is available for both synchronous and asynchronous receive operations.

Prefetching and ReceiveBatch

While prefetching multiple messages has semantics similar to processing messages in a batch (ReceiveBatch), there are some minor differences to keep in mind when using the two together.

Prefetch is a configuration (or mode) on the client (QueueClient and SubscriptionClient), whereas ReceiveBatch is an operation (with request-response semantics).

While using these together, consider the following cases:

  • Prefetch should be greater than or equal to the number of messages you expect to receive from ReceiveBatch.
  • Prefetch can be up to n/3 times the number of messages processed per second, where n is the default lock duration.

There are some challenges with a greedy approach (that is, keeping the prefetch count very high), because it implies that the message is locked to a particular receiver. The recommendation is to try out prefetch values between the thresholds mentioned above and empirically identify what fits.
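The following sketch combines the two; queueClient is assumed to exist and to use peek-lock:

// Prefetch at least as many messages as a single ReceiveBatch call will request.
queueClient.PrefetchCount = 100;

IEnumerable<BrokeredMessage> batch = queueClient.ReceiveBatch(50);
foreach (BrokeredMessage message in batch)
{
    // ... process the message ...
    message.Complete();
}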

Multiple queues

If the expected load cannot be handled by a single partitioned queue or topic, you must use multiple messaging entities. When using multiple entities, create a dedicated client for each entity, instead of using the same client for all entities.

Development and testing features

Service Bus has one feature, used specifically for development, which should never be used in production configurations: TopicDescription.EnableFilteringMessagesBeforePublishing.

When new rules or filters are added to a topic, you can use TopicDescription.EnableFilteringMessagesBeforePublishing to verify that the new filter expression is working as expected.
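As a sketch, the flag might be enabled on a development topic as follows; "dev-topic" and namespaceManager are placeholders, and the flag should be removed before the topic is used in production:

// Enable publish-time filter checking on a development topic only.
TopicDescription td = new TopicDescription("dev-topic");
td.EnableFilteringMessagesBeforePublishing = true;
namespaceManager.CreateTopic(td);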

Scenarios

The following sections describe typical messaging scenarios and outline the preferred Service Bus settings. Throughput rates are classified as small (less than 1 message/second), moderate (1 message/second or greater but less than 100 messages/second), and high (100 messages/second or greater). The number of clients is classified as small (5 or fewer), moderate (more than 5 but less than or equal to 20), and large (more than 20).

High-throughput queue

Goal: Maximize the throughput of a single queue. The number of senders and receivers is small.

  • To increase the overall send rate into the queue, use multiple message factories to create senders. For each sender, use asynchronous operations or multiple threads.
  • To increase the overall receive rate from the queue, use multiple message factories to create receivers.
  • Use asynchronous operations to take advantage of client-side batching.
  • Set the batching interval to 50 ms to reduce the number of Service Bus client protocol transmissions. If multiple senders are used, increase the batching interval to 100 ms.
  • Leave batched store access enabled. This access increases the overall rate at which messages can be written into the queue.
  • Set the prefetch count to 20 times the maximum processing rate of all receivers of a factory. This count reduces the number of Service Bus client protocol transmissions.
  • Use a partitioned queue for improved performance and availability. A sketch combining these settings follows this list.
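The sketch below puts the settings together; the queue path, namespaceUri, and tokenProvider are placeholders, and the partitioning flag must be set when the queue is created:

// Create the queue as a partitioned queue (done once, at provisioning time).
QueueDescription qd = new QueueDescription("highThroughputQueue");
qd.EnablePartitioning = true;
namespaceManager.CreateQueue(qd);

// Factory for senders: 50-ms client-side batching interval.
MessagingFactorySettings settings = new MessagingFactorySettings();
settings.TokenProvider = tokenProvider;
settings.NetMessagingTransportSettings.BatchFlushInterval = TimeSpan.FromMilliseconds(50);
MessagingFactory senderFactory = MessagingFactory.Create(namespaceUri, settings);
QueueClient sender = senderFactory.CreateQueueClient("highThroughputQueue");

// Factory for receivers: prefetch 20 x the combined processing rate of its receivers.
MessagingFactory receiverFactory = MessagingFactory.Create(namespaceUri, settings);
QueueClient receiver = receiverFactory.CreateQueueClient("highThroughputQueue");
receiver.PrefetchCount = 600; // for example, 20 x 3 receivers x 10 messages/second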

Multiple high-throughput queues

Goal: Maximize the overall throughput of multiple queues. The throughput of an individual queue is moderate or high.

To obtain maximum throughput across multiple queues, use the settings outlined above to maximize the throughput of a single queue. In addition, use different factories to create clients that send to or receive from different queues, as in the sketch below.
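A minimal sketch (the connection string and queue paths are placeholders) that uses a separate factory, and therefore a separate connection, for each queue:

// One factory (and one connection) per queue.
MessagingFactory factoryA = MessagingFactory.CreateFromConnectionString(connectionString);
MessagingFactory factoryB = MessagingFactory.CreateFromConnectionString(connectionString);

QueueClient clientA = factoryA.CreateQueueClient("queueA");
QueueClient clientB = factoryB.CreateQueueClient("queueB");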

Low latency queue

Goal: Minimize end-to-end latency of a queue or topic. The number of senders and receivers is small. The throughput of the queue is small or moderate.

  • Disable client-side batching. The client immediately sends a message.
  • Disable batched store access. The service immediately writes the message to the store.
  • If using a single client, set the prefetch count to 20 times the processing rate of the receiver. If multiple messages arrive at the queue at the same time, the Service Bus client protocol transmits them all at the same time. When the client receives the next message, that message is already in the local cache. The cache should be small.
  • If using multiple clients, set the prefetch count to 0. By setting the count to 0, the second client can receive the second message while the first client is still processing the first message.
  • Use a partitioned queue for improved performance and availability.

Queue with a large number of senders

Goal: Maximize the throughput of a queue or topic with a large number of senders. Each sender sends messages at a moderate rate. The number of receivers is small.

Service Bus enables up to 1,000 concurrent connections to a messaging entity (or 5,000 using AMQP). This limit is enforced at the namespace level, and queues, topics, and subscriptions are capped by the limit of concurrent connections per namespace. For queues, this number is shared between senders and receivers. If all 1,000 connections are required for senders, replace the queue with a topic and a single subscription. A topic accepts up to 1,000 concurrent connections from senders, whereas the subscription accepts an additional 1,000 concurrent connections from receivers. If more than 1,000 concurrent senders are required, the senders should send messages to Service Bus via HTTP.

To maximize throughput, perform the following steps:

  • If each sender resides in a different process, use only a single factory per process.
  • Use asynchronous operations to take advantage of client-side batching.
  • Use the default batching interval of 20 ms to reduce the number of Service Bus client protocol transmissions.
  • Leave batched store access enabled. This access increases the overall rate at which messages can be written into the queue or topic.
  • Set the prefetch count to 20 times the maximum processing rate of all receivers of a factory. This count reduces the number of Service Bus client protocol transmissions.
  • Use a partitioned queue for improved performance and availability.

Queue with a large number of receivers

Goal: Maximize the receive rate of a queue or subscription with a large number of receivers. Each receiver receives messages at a moderate rate. The number of senders is small.

Service Bus enables up to 1,000 concurrent connections to an entity. If a queue requires more than 1,000 receivers, replace the queue with a topic and multiple subscriptions. Each subscription can support up to 1,000 concurrent connections. Alternatively, receivers can access the queue via the HTTP protocol.

To maximize throughput, do the following:

  • If each receiver resides in a different process, use only a single factory per process.
  • Receivers can use synchronous or asynchronous operations. Given the moderate receive rate of an individual receiver, client-side batching of Complete requests does not affect receiver throughput.
  • Leave batched store access enabled. This access reduces the overall load of the entity. It also reduces the overall rate at which messages can be written into the queue or topic.
  • Set the prefetch count to a small value (for example, PrefetchCount = 10). This count prevents receivers from being idle while other receivers have large numbers of messages cached.
  • Use a partitioned queue for improved performance and availability.

Topic with a small number of subscriptions

Goal: Maximize the throughput of a topic with a small number of subscriptions. A message is received by many subscriptions, which means the combined receive rate over all subscriptions is larger than the send rate. The number of senders is small. The number of receivers per subscription is small.

To maximize throughput, do the following:

  • To increase the overall send rate into the topic, use multiple message factories to create senders. For each sender, use asynchronous operations or multiple threads.
  • To increase the overall receive rate from a subscription, use multiple message factories to create receivers. For each receiver, use asynchronous operations or multiple threads.
  • Use asynchronous operations to take advantage of client-side batching.
  • Use the default batching interval of 20 ms to reduce the number of Service Bus client protocol transmissions.
  • Leave batched store access enabled. This access increases the overall rate at which messages can be written into the topic.
  • Set the prefetch count to 20 times the maximum processing rate of all receivers of a factory. This count reduces the number of Service Bus client protocol transmissions.
  • Use a partitioned topic for improved performance and availability.

Topic with a large number of subscriptions

Goal: Maximize the throughput of a topic with a large number of subscriptions. A message is received by many subscriptions, which means the combined receive rate over all subscriptions is much larger than the send rate. The number of senders is small. The number of receivers per subscription is small.

Topics with a large number of subscriptions typically expose a low overall throughput if all messages are routed to all subscriptions. This low throughput is caused by the fact that each message is received many times, and all messages in a topic and all its subscriptions are stored in the same store. It is assumed that the number of senders and the number of receivers per subscription are small. Service Bus supports up to 2,000 subscriptions per topic.

To maximize throughput, try the following steps:

  • Use asynchronous operations to take advantage of client-side batching.
  • Use the default batching interval of 20 ms to reduce the number of Service Bus client protocol transmissions.
  • Leave batched store access enabled. This access increases the overall rate at which messages can be written into the topic.
  • Set the prefetch count to 20 times the expected receive rate in seconds. This count reduces the number of Service Bus client protocol transmissions.
  • Use a partitioned topic for improved performance and availability.

Next steps

To learn more about optimizing Service Bus performance, see Partitioned messaging entities.